Cybersecurity & Safety / Holding Companies Accountable
May 10, 2016

In April, the New York Times provided Facebook with evidence of 7 cases of groups offering military-grade weapons for sale, ranging from handguns and grenades to heavy machine guns and guided anti-aircraft missiles. Facebook shut down 6. How did they get on Facebook in the first place? The findings were based on a study by the private consultancy Armament Research Services about arms trafficking on social media in Libya, along with reporting by The New York Times on similar trafficking in Syria, Iraq and Yemen. Facebook must do a much better job making sure its platform is not used in ways that promote terror and crime.

Paper policies are no substitute for proactive policing

All of these violate Facebook’s policies, which since January have forbidden the facilitation of private sales of firearms and other weapons, according to Monika Bickert, a former federal prosecutor who is responsible for developing and enforcing the company’s content standards.  And yet the sales are still happening on the site.  Stop putting the rush to scale ahead of protecting people's safety.
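
Proactive policing does not require solving hard AI problems on day one. As a purely illustrative sketch (not anything Facebook has described doing), even a crude first-pass filter could route posts that pair weapons vocabulary with sale vocabulary to human review before they go live; all terms and names below are invented:

```python
import re

# Invented example terms; a real system would use far richer lists,
# trained classifiers, and image matching.
WEAPON_TERMS = r"\b(grenade|rpg|anti-aircraft|machine gun|rifle|pistol|handgun)\b"
SALE_TERMS = r"\b(for sale|price|asking|dm to buy|serious buyers)\b"

def needs_review(post_text: str) -> bool:
    """Route a group post to human review before it goes live if it
    pairs weapon vocabulary with sale vocabulary."""
    text = post_text.lower()
    return bool(re.search(WEAPON_TERMS, text) and re.search(SALE_TERMS, text))

print(needs_review("Heavy machine gun for sale, serious buyers only"))  # True
print(needs_review("Museum exhibit on WWII rifles opens Friday"))       # False
```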


Greg Kovacic, CFA


Tagged with: facebook
Cybersecurity & Safety / Government Regulations / Holding Companies Accountable
May 10, 2016

Regardless of your personal opinion of the Chinese government, you have to admire how quickly it can move when it wants to.  Only 1 week after launching an investigation into the online advertising practices of Internet giant Baidu, regulators ordered the company to revamp the way it handles advertising results in online searches.  Baidu's response: “Baidu should provide better and more reliable search services,” said Xiang Hailong, Baidu’s senior executive in charge of search, in a statement.  The company said Mr. Wei’s death “prompted all Baidu employees to re-examine the responsibilities of a search company.”  The same response from Google would have been along the lines of, "the government will destroy free speech and the internet by trying to regulate our business."  :-)

The China investigation was launched after Wei Zexi, a college student with cancer, died after undergoing a therapy he found through an online advertisement on Baidu, a treatment which turned out to have no scientific validity.

China’s Cyberspace Administration said Baidu must change its system by the end of May by attaching “eye-catching markers,” as well as risk warnings, to all paid results. It also said ads must make up no more than 30% of the results displayed on a page.
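
To make the new rules concrete, here is a toy sketch of what a compliant results page could look like. The regulator published requirements, not an algorithm, so every structure and name below is invented:

```python
def compose_page(organic, paid, page_size=10, max_ad_share=0.3):
    """Cap paid placements at max_ad_share of the page and label each one."""
    max_ads = int(page_size * max_ad_share)  # 3 ad slots on a 10-result page
    ads = [f"[AD - risk warning applies] {item}" for item in paid[:max_ads]]
    return ads + organic[: page_size - len(ads)]

page = compose_page(
    organic=[f"organic result {i}" for i in range(1, 11)],
    paid=["clinic A (paid)", "clinic B (paid)", "clinic C (paid)", "clinic D (paid)"],
)
print(page)  # 3 labeled ads followed by 7 organic results
```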

Investigators found Baidu’s keyword pay-for-placement services influenced Mr. Wei’s medical choices, and that the pay-for-placement model compromises the fairness and objectivity of search results, misleading Internet users.  “It’s not permitted to only consider the amount of money that has been paid,” the investigators said.  Google, take notes.


Greg Kovacic, CFA

Tagged with: baidu, china
Holding Companies Accountable
May 10, 2016

In Shenzhen, China, a female teacher was robbed and murdered by a driver using the ride-hailing app Didi Kuaidi.  How did it respond?  Like a company which places Digital Trust at the heart of what it does.  Didi Kuaidi's swift response included suspending drivers and rolling out new security measures:

  • Adding an emergency button to the app so passengers in distress can call for help
  • Rolling out a face-recognition system aimed at letting passengers confirm the driver is the same person registered on the Didi Kuaidi platform
  • Letting passengers share their estimated time of arrival with family and friends
  • Alerting both passengers and Didi’s platform if a driver significantly deviates from his or her route (a minimal sketch of how such a check might work follows this list)
  • Suspending a portion of its 14m registered drivers; media reported 8,000 drivers in Shenzhen were suspended for violating Didi’s company policies. Didi said it reviews its drivers every month and suspends those who have had driver violations or failed updated criminal checks.
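
None of this is exotic technology. As a purely illustrative sketch (Didi has not published how its alert actually works), route-deviation detection can be as simple as comparing each GPS fix against the planned route and alerting after a few consecutive off-route samples; every name and threshold below is an assumption:

```python
import math

EARTH_RADIUS_M = 6_371_000

def haversine_m(a, b):
    """Great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(h))

def off_route_m(fix, route):
    """Distance from a GPS fix to the nearest waypoint on the planned
    route (a real system would measure against route segments)."""
    return min(haversine_m(fix, wp) for wp in route)

def detect_deviation(fixes, route, threshold_m=500, consecutive=3):
    """Alert once `consecutive` fixes in a row stray past threshold_m."""
    strikes = 0
    for fix in fixes:
        if off_route_m(fix, route) > threshold_m:
            strikes += 1
            if strikes >= consecutive:
                return True  # notify the passenger and the platform
        else:
            strikes = 0
    return False
```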

“This next level is about stopping the crime from happening,” said Stephen Zhu, vice president of Didi Kuaidi.  The same statement from Uber would have gone something like this: "It's sad this horrible tragedy happened.  Uber takes its riders' safety seriously."  It took a murder in India to get Uber to launch a panic button there.  To be fair, these Digital Trust actions should have been implemented before someone died.

An Uber Technologies Inc. spokeswoman confirmed media reports the driver in Shenzhen previously applied to register on Uber, but had failed to clear Uber’s screening process.  What does it do differently in China that it should be doing in the US?


Greg Kovacic, CFA

Tagged with: china, didi-kuaidi, uber
Holding Companies Accountable / Laws & Regulations
May 10, 2016

Ride-sharing apps Uber and Lyft have again shown they put their rush to scale ahead of protecting users' safety.  In December 2015, Austin, Texas issued regulations requiring ride-sharing drivers (like those "working" for Uber and Lyft) to have their fingerprints scanned by Feb. 1, 2017 and undergo fingerprint-based background checks.  How did Uber and Lyft respond?  They spent at least US$8.6m on what some called an "aggressive" local marketing campaign to overturn these regulations, flooding the city with fliers and sending door-to-door canvassers and text messages to residents.

The people won, and like spoiled children who spit out their pacifier when they do not get what they want, Uber and Lyft responded by halting operations in Austin, Texas.  And the Uber and Lyft drivers lose their income.  They have played this game of pressuring local regulators before, and lost.  Uber just does not really have any leverage.  No matter how hard Uber and Lyft try, app-based car services do not provide a distinct service from taxis, so they are not going to be regulated differently.  Uber is now regulated by laws in 30 states as well as hundreds of cities around the world.  Yet it keeps fighting a losing battle.  Regulators in Portland, Oregon; Las Vegas, Nevada; and Miami, Florida did not back down when Uber pressured them to skip fingerprinting; Uber spit out its pacifier, then ultimately resumed operations in those cities.

Uber screens drivers using Checkr.  It runs security checks identifying addresses associated with a person over the past 7 years and matches any convictions during that time.  Lyft uses a similar background-check service from Sterling Infosystems.  Many taxi companies use Live Scan, which uses a driver's fingerprints to check FBI and state databases for a match.

Uber and Lyft argue their background checks are better and more efficient than fingerprint checks.  The data shows this is simply not true.  The real issue is that good-quality background checks cost money and time and impede one's rush to scale.  Live Scan is costlier and takes longer to process.  The rush to scale is put before protecting users' safety.

And here's the funny part.  Uber already mandates fingerprint scans in New York, where all of its drivers must apply for a TLC license.  Houston started requiring ride-sharing drivers to have fingerprint scans in late 2014.  Lyft paused operations in Houston after fingerprints were made mandatory and hasn’t resumed since.  Uber continues to operate in areas surrounding Austin and permits its drivers to drop passengers off inside city limits.

Chris Nakutis, Uber’s general manager in Austin, said in an emailed statement to the Wall Street Journal: “Disappointment does not begin to describe how we feel about shutting down operations in Austin.  We hope the city council will reconsider their ordinance so we can work together to make the streets of Austin a safer place for everyone.”  Empty PR-speak for "we put our rush to scale ahead of our riders' safety".  And of course Uber's and Lyft's approximately 10,000 "non-employee" (nod nod wink wink) drivers in Austin lose their income.  Good move, Uber and Lyft.

And if you needed any more proof Uber puts its rush to scale ahead of user safety: in April, Uber agreed to pay US$25m to settle a lawsuit brought by the district attorneys of San Francisco and Los Angeles, who sued Uber for misleading consumers on background checks of its drivers.  California regulators found evidence Uber failed to screen out 25 drivers with criminal records, including convictions for kidnapping and murder.

Last month, regional police in Canada charged an Uber driver with sexual assault, after a 34-year-old woman said the driver took her to an empty parking lot and assaulted her.  Uber bad.

User safety must come before the rush to scale.  It's that simple.


Greg Kovacic, CFA

Tagged with: lyft, uber
Government Regulations / Holding Companies Accountable / Protecting My Online Identity, Reputation & Privacy
March 15, 2016

In October 2015, Alphabet, Google's renamed parent company, dropped the original “Don’t Be Evil” mantra from its code of conduct.  “Employees of Alphabet and its subsidiaries and controlled affiliates should do the right thing—follow the law, act honorably, and treat each other with respect,” the new code reads, noticeably dropping the famous motto.  Yes, Google, it would be nice if as a corporation you actually "follow the law, act honorably, and treat each other with respect".  Dear Google: people first, not profits.


EU 1 vs. Google 0

On May 12, 2014, the EU shot and scored its first goal against Google when the EU's highest court in effect created the "right to be forgotten" in online search results for EU nationals and residents.  Google fought this in the courts every step of the way, until it lost.  The European Court of Justice ruled any EU citizen can request global search engines remove links to items about them from search queries which are "inadequate, irrelevant or no longer relevant, or excessive."  The standard is lower for non-public people.  The original content remains on the original publishing website; only the link is removed from search results.

Google interpreted this ruling to only apply to Google's EU domains (e.g. google.de, google.fr, google.co.uk, google.es, etc.) and not google.com or any other google search sites outside the EU.

EU privacy regulators disagreed and are pushing for the right to be forgotten to be applied to all of Google's search domains.

Google argues if it agrees to alter all of its global search results on behalf of Europeans, other countries (e.g. China and Russia) could demand Google change its practices to meet their own national laws.  Peter Fleischer, Google's global privacy counsel, wrote in a blog last year, “In the end, the [i]nternet would only be as free as the world’s least free place.”  A scare tactic if there ever was one.  Does Google really think it is above the laws of any country?

Google is purely motivated by maintaining its ability to monetize people's data.  The EU is purely motivated by protecting its citizens' right to privacy.  Thus the EU-Google game began.

EU 2 vs. Google 0

On August 18, 2015 in the UK, an individual asked Google to remove links under the EU's 'right to be forgotten', and Google agreed the request met its criteria and removed those links.  Then Google notified the news organization about having removed such links, and presto, like magic, new 'news' stories were created about removing the original links.  The individual subsequently requested Google remove those new links too.  Google refused.  Why?  Google said the new stories were about an important news event.  But if the original news stories satisfied the EU's 'right to be forgotten' criteria and were removed from the search results, then any subsequent story about the removal of those same links should also satisfy the EU's 'right to be forgotten'.  That is, unless you are Google and logic fails you.  Google uses popular phrases like "public interest" and "free speech" to distract people from its real agenda - no accountability or responsibility for what is in its search results.

The UK had enough of Google's games.  The UK’s Information Commissioner’s Office, the UK's data protection watchdog, recently ordered Google to remove links to ‘right to be forgotten’ removal stories.

EU 3 vs. Google 0

On September 21, 2015, France's privacy guardian, the Commission Nationale de l’Informatique et des Libertés (CNIL), rejected Google’s efforts to limit applying the EU's privacy ruling to Europe.

EU 4 vs. Google 0

Until now, Google has complied with the EU's court decision by removing links, at the request of EU residents/nationals (and residents of EEA countries like Norway, where EU data protection rules also apply), from its European search results.  But Google continues to refuse to extend those privacy rights to its global domain Google.com.

Google has started to backtrack from its original untenable position and announced new measures which will take effect by March 2016.  When someone succeeds in asking Google to remove or block access to a link for legitimate privacy reasons, Google will now remove the link from its European domains, which it already does, and also block access to the link from all of its global sites when they are used from the country where the request was submitted.  For example, if a person in the UK succeeds in convincing Google to remove a specific link, Google will also block that link from search results on any Google domain around the world when the user searches from the UK.
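
Mechanically, this amounts to keying each delisting to the country where the request was filed. A minimal sketch of how such geo-scoped delisting could work, assuming a simple in-memory store (Google's actual implementation is not public, and every name and value below is invented):

```python
# Sample EU domains only; the real list is much longer.
EU_DOMAINS = {"google.de", "google.fr", "google.co.uk", "google.es"}

# url -> country code where the removal request was granted
delisted = {"http://example.com/old-story": "GB"}

def visible(url, search_domain, searcher_country):
    """Return False if this result must be suppressed for this searcher."""
    request_country = delisted.get(url)
    if request_country is None:
        return True                      # never delisted
    if search_domain in EU_DOMAINS:
        return False                     # hidden on all EU domains
    return searcher_country != request_country  # geo-blocked worldwide

def filter_results(results, search_domain, searcher_country):
    return [u for u in results if visible(u, search_domain, searcher_country)]

# Example: a UK searcher on google.com no longer sees the link,
# while a US searcher still does.
assert not visible("http://example.com/old-story", "google.com", "GB")
assert visible("http://example.com/old-story", "google.com", "US")
```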

EU 5 vs. Google 0 ?

Google is clearly taking the minimal baby steps it thinks it can get away with to try and satisfy the EU.  How long until the EU forces Google to comply with its original intention?

Google's own data from its transparency report shows people are not abusing this right to whitewash history - as Google earlier tried to scare people into believing would be the case.  Google loves data, because data does not lie (though it can be manipulated, much like Google's public relations strategy against a person's right to be forgotten).  And today's media ensures anyone trying to whitewash history will be exposed.  Do not forget: Google only has to remove the link from its search results; it does not take anything off the internet.

Alphabet's (and Google's) current code of employee conduct reads: “Employees of Alphabet and its subsidiaries and controlled affiliates should do the right thing—follow the law, act honorably, and treat each other with respect.”   Alphabet/Google need a new corporate code which reads the same way.


Greg Kovacic, CFA

Holding Companies Accountable / Protecting My Online Identity, Reputation & Privacy
February 29, 2016

Dear "smart" Silicon Valley: "dumb pipes" are not so dumb after all

Ad-blocking technology is a divisive topic.  On one side are online publishers providing content, and Silicon Valley's tech giants and apps, which make billions of dollars collecting user data and pushing ads to mobile viewers over the telecom networks.  On the other side are the users, who are increasingly fed up with ads ad nauseam (pun intended): irrelevant and excessive ads, slow content loading speeds, and ultimately paying for the data these ads consume as it travels over the telecom network and counts towards their data usage allowance.  And on yet another side are the telecom networks, which get stuck with the costs of building and maintaining the networks which allow all this to happen in the first place.  Online ad companies such as Alphabet (Google) and Facebook make billions of dollars on data, and do not contribute anything to the costs of building and maintaining the actual telecom networks on which the collected data travels and the ads are delivered.

Companies like Shine and Apple already have created software for users to install on their mobile devices to eliminate or seriously curtail mobile ads.

Now, telecom operators are raising the stakes by putting such tools on the telecom network itself, while continuing to give the user control over whether to accept ads or not.  A clever move for what have been considered "dumb pipes" delivering "smart content".
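
How can a "pipe" block ads at all? One common network-level approach is DNS filtering at the carrier's resolver; whether Shine works this way is not public, so take the sketch below purely as an illustration of the idea, with all names invented:

```python
# Invented example domains; a real deployment would use curated lists.
AD_DOMAINS = {"ads.example-network.com", "tracker.example.net"}

def resolve(hostname: str, upstream_dns, user_opted_in: bool) -> str:
    """Carrier-side resolver: sinkhole ad domains for opted-in users."""
    if user_opted_in and hostname in AD_DOMAINS:
        return "0.0.0.0"  # sinkhole: the ad never loads or bills data
    return upstream_dns(hostname)

# Example with a stand-in upstream resolver:
upstream = lambda host: "93.184.216.34"
print(resolve("ads.example-network.com", upstream, user_opted_in=True))  # 0.0.0.0
print(resolve("news.example.org", upstream, user_opted_in=True))         # 93.184.216.34
```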

Last year, Jamaica-based telecom operator Digicel started working with Shine as the first operator to implement its technology to block ads on its networks in the Caribbean and South Pacific.

Three UK and Three Italia, two telecom networks owned by Li Ka-shing and his CK Hutchison Holdings, are now working with Shine Technologies to expand their ad-blocking technology to their networks and ultimately to other wireless telecom providers in their group.

Three UK is staking out a smart position of giving customers more control over mobile ads - something Silicon Valley's tech giants have resisted for fear of upsetting the cash machines which collect user data and deliver ads.  Three UK is clear that users should not have to pay data charges for mobile ads, and that users' privacy should be protected from ads which gather data.  And if users want, they should be able to receive non-intrusive and relevant ads.

Power to the people!


Greg Kovacic, CFA

Tagged with: ad-blocker
Cybersecurity & Safety / Holding Companies Accountable
February 29, 2016

Putting the "boo" in Facebook

Did you know any Facebook user can set up an open, closed or secret group?  Secret groups cannot be found using Facebook's search facility, only members can see what content is inside them, and only people invited by existing members can join.  Why would Facebook allow secret groups in the first place?  Can there be any healthy or decent use for a "secret" group?  The BBC found one being used for online child sexual abuse content.

Facebook [predictably] gives its standard, meaningless public relations response, saying it removes content which is a "solicitation of sexual material, any sexual content involving minors, threats to share intimate images and offers of sexual services".  Well, Facebook actually only acts after the material has been posted, discovered by someone else and then flagged for review.  Too little, too late.

A parent's worst nightmare (one of them, anyway)

The BBC discovered innocent pictures of one mother's 11-year-old daughter had been copied from the mother's own blog site and posted to a Facebook group used by paedophiles, where they were swapped by members, who also posted sexual comments about them.

"But equally upsetting is the fact that Facebook allows these secret groups to exist, unmonitored and unchecked, making them rife for abuse by paedophiles."

If Facebook can quickly remove terrorist profiles, why is it so slow to remove obscene child sexual abuse content?

All the brainpower at Facebook, and it is unable to create an algorithm which screens uploads and prevents such inappropriate material from being uploaded to its sites?  Our digital world is all 1s and 0s, right?
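
It certainly has the tools. The standard industry technique is to fingerprint known abuse imagery and reject matching uploads before they ever go live - the approach behind tools like Microsoft's PhotoDNA, whose real algorithm is proprietary and perceptual rather than exact. A hedged, self-contained sketch of the idea:

```python
import hashlib

# Fingerprint database of known abuse imagery. Real systems use
# perceptual hashes, which survive resizing and re-encoding; a plain
# SHA-256 is shown only to keep the sketch runnable, and it misses
# even trivially edited copies.
known_bad_hashes: set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

def flag_known_image(image_bytes: bytes) -> None:
    """Add a confirmed abuse image's fingerprint to the blocklist."""
    known_bad_hashes.add(fingerprint(image_bytes))

def screen_upload(image_bytes: bytes) -> bool:
    """Return True if the upload may proceed, False if it is blocked
    before the content ever goes live."""
    return fingerprint(image_bytes) not in known_bad_hashes

# Example: once an image is flagged, re-uploads are blocked at the door.
flag_known_image(b"fake-image-bytes")
assert screen_upload(b"fake-image-bytes") is False
assert screen_upload(b"different-image") is True
```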

Shortly after the December shootings in San Bernardino, California, a team at Facebook had already removed a profile for Tashfeen Malik after seeing her name in news reports.  So Facebook can act quickly, but only when it thinks it will get an immediate PR black eye.  As government pressure mounts on Facebook to do more to remove terrorist content from its site, it is more aggressively policing material it views as supporting terrorism.  Facebook now is quick to remove users who back terror groups, and it investigates posts by their friends.  Facebook also has a team solely focused on removing terrorist content and helping promote “counter speech” which aims to discredit militant groups.

Facebook uses profiles it deems supportive of terrorism as a starting point to identify and delete associated accounts which may also post material supporting terrorism.

Facebook could put in place all of these same simple measures to protect our children, if it really wanted to.

"If it’s the leader of Boko Haram and he wants to post pictures of his two-year-old and some kittens, that would not be allowed."
- Monika Bickert, Facebook’s head of global policy management

But child porn is apparently ok on Facebook...

Dear Facebook, Digital Trust must be a forethought, not an afterthought.


Greg Kovacic, CFA

Government Regulations / Holding Companies Accountable / Protecting My Online Identity, Reputation & Privacy
February 23, 2016


No doubt, if you are interested in Digital Trust, you are following Apple's refusal to abide by a lawful US court order to help the FBI bypass the passcode security feature of a single iPhone which belonged to a person who shot and killed 14 people in December in California.  I wonder, if this were Osama bin Laden's iPhone, whether Apple would be so coy.

Some say Apple is leveraging this for free global marketing.  Possibly.  The reality is more likely that neither Apple nor any other tech company can be seen to be caving in to the government on privacy, and once the laws change, Apple will too.  Until then, Apple will of course play the part of the victim, scoring major sympathy points.

The reality is the current laws are behind the digital times.  The US government needs to update them for a world where communications increasingly take place via mobile digital devices.

My view is no matter how much lobbying Apple, Google and Facebook do, this legal update will happen, simply because it has to; the alternative is to leave the police incapable of doing anything in a digital world.  When the tech companies fail to protect the public's safety, government regulators are forced to step in to fill the void.

Balancing the public's right to safety with a person's right to privacy

The key is whether the powers that be can craft smart regulations which create a level playing field for the entire communications industry, which includes telecom service providers, ISPs, search engines, social media platforms, mobile device manufacturers, other digital communication tools etc., as well as balance a person's right to privacy with society's right to safety.

My view is society's right to safety outweighs a single person's right to privacy - when decided by an impartial court of law separate from the government itself.  This last part is the key, as impartial courts of law do not really exist in many countries, other than on paper.  The US is different from most countries, with a clear separation of government and the legal system.  If a court issues an order, the order should be followed.  Apple is not above the law.  It is playing a public relations game around the clarity of an old law and how it applies in the digital age.  Apple has a right to challenge the court order, as the US legal process allows it to do so.  Ultimately it will likely be the US Supreme Court which fills the legal void left by an out-of-date US Congress.

The law is clear for telecom carriers in the US - and was designed for the pre-digital world

In 1994, almost 22 years ago and 13 years before the first iPhone was launched, the US Congress passed the Communications Assistance for Law Enforcement Act, which required telecom carriers to build surveillance capability into their networks.  This law was later amended to cover voice calls placed over the internet, but not all internet communication.  A court issues an order, the telecom carriers comply and provide law enforcement access to wiretap specific parties' communications.  Simple.  The requirement for phone companies is that the government has the ability to intercept telecom traffic in real time.  Much has changed since this law took effect.

Apple, Google and Facebook already cooperate with the government

Government and law enforcement can currently ask phone and internet companies (e.g. Facebook, Apple, Google) to turn over all customer info they retain on their services, and increasingly the government is asking tech companies to do so.  And they all comply, including Apple.  A grey area: there is currently no requirement specifying how long phone or internet companies must store such data.

Apple says it can provide customer data stored in its iCloud service, such as phone backups which can include stored photos, email, documents, contacts, calendars and bookmarks.  In the San Bernardino case, Apple has provided data for Mr. Farook up until Oct. 19, the last time his phone synced to his iCloud.  There are 44 days of data, such as iMessages and FaceTime calls, which may only exist on his passcode-locked iPhone.  This is where Apple's hypocrisy gets called for what it really is - it cannot comply with some government requests, choose not to comply with others, and still play its privacy public relations game with a straight face.  If you want to take a moral position, you are either all in or all out.  You can't be half drunk.  If you play in the middle, you are just playing a game, and not a game based on moral "values".

Digital handcuffs for the police?

It is only a matter of time before the US updates these laws for the digital age.  You can be sure the lobbying arms of Google, Apple and Facebook, which all oppose any type of government regulation in their space, are in full bribery mode - I mean political "donation" mode - during an important election year which includes a US Supreme Court vacancy.

But given the proverbial digital handcuffs the US Congress would be placing on law enforcement by failing to update the laws for the digital age, it seems a long shot the tech lobbyists will win this battle of wills.  The government appears to have been searching for a while for the right type of legal case - one narrowly defined around a single user, but important enough to apply broadly to other specific individuals without being labeled "mass surveillance" - to get this difference of opinion legally resolved once and for all.  As Clint Eastwood's character said in the movie Heartbreak Ridge, "It's my will against yours, and you will lose."

The real danger in the US is the average age of the elected officials tasked with crafting laws: the average age of a member of the House of Representatives is 57; for Senators it is 61.  Not old by any means, but not exactly at the forefront of digital technology either.

We the people need to push our governments everywhere around the world to enact smart regulations which balance the public's right to safety with a person's right to privacy, in a way which requires an independent court of law to decide whether to issue the order.  And relatedly, anything found which is unrelated to the initial court order should be rendered inadmissible in any court of law.  If we leave it up to government bureaucrats and tech company lobbyists, we will get something worse - a system which neither protects the public's right to safety nor a person's right to privacy.


Greg Kovacic, CFA

Holding Companies Accountable
February 23, 2016

[Photo: George, 35, protests with other commercial drivers for the app-based ride-sharing company Uber against working conditions outside the company's office in Santa Monica, California, June 24, 2014. REUTERS/Lucy Nicholson]

Uber is rapidly becoming the global poster child for how NOT to do employee background checks.  I stand corrected: Uber does not consider the drivers who perform its services for customers to be employees at all, but rather independent contractors, so it does not have to afford them the same protections an "employee" receives at every other company in the US.

First was India, where Uber originally did not do background checks on its drivers, until one kidnapped and raped a young woman.  Only after this unfortunate incident did Uber make driver background checks mandatory.  Good thinking, Uber.  If Uber had done driver background checks from the beginning, it would have uncovered this particular driver had several previous encounters with the law, including an earlier rape charge.  Uber probably would have passed this driver anyway in its rush to scale.

In California, the government alleges in a complaint filed in court that Uber failed to screen out 25 drivers with criminal records, including registered sex offenders, identity thieves, burglars, and one person convicted of kidnapping and murder.  Sex offenders!  Kidnapping and murder!  Lions and tigers and bears, oh my! (for those who know the Wizard of Oz).  The convicted murderer, released on parole in 2008 after serving 26 years in prison, had given 1,168 Uber rides.

The latest Uber fail is in the US, where a new driver, Jason Dalton, went on a shooting rampage in Kalamazoo, Michigan, randomly killing 6 people, all the while continuing to pick up passengers in between his shootings.

Uber's position is that Dalton passed Uber's background check and had no prior criminal record, so a background check, with fingerprinting or not, would not have flagged Dalton as a risk.  This is where Uber misses the point.  If Uber put Digital Trust first, it would have included a panic button in the app so riders, or even other drivers, could alert nearby police to erratic or disturbing behaviour.  This sad event could have been brought to an end much sooner and with fewer deaths.  Another proactive step Uber can take is to flag changes in a driver's background-check status in real time after the driver starts working for Uber; Uber currently has no system to keep current on the background-check status of its drivers after the initial check is done (a minimal sketch of such re-screening follows below).  As long as Uber puts scale first, its customers' safety will come second, if at all.
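
For what it is worth, ongoing re-screening is not technically hard. A minimal sketch, assuming a screening provider that can simply be re-queried (Uber's and its vendors' actual systems are not public, and every name below is invented):

```python
# Stand-in for what the screening provider would return today, keyed
# by driver; in reality this would be an API call to a vendor.
PROVIDER_RESULTS = {
    "d-100": "clear",
    "d-101": "flagged: new conviction on record",
}

def rescreen_all(fleet: dict[str, str]) -> list[tuple[str, str]]:
    """Re-run checks against stored results; return any status changes
    for human review and remember the new status."""
    alerts = []
    for driver_id, last_result in fleet.items():
        current = PROVIDER_RESULTS.get(driver_id, last_result)
        if current != last_result:
            alerts.append((driver_id, current))
            fleet[driver_id] = current
    return alerts

# Example: driver d-101 was clear at onboarding but is flagged today.
fleet = {"d-100": "clear", "d-101": "clear"}
print(rescreen_all(fleet))  # [('d-101', 'flagged: new conviction on record')]
```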

Uber and other large tech companies are failing their customers' safety and security in their rush to scale.  Uber is losing US$1bn a year in China, yet it cannot even spend the time, money and effort to put in place basic safety measures like a rider (and driver) panic button connecting to local police.

Going forward, when a large tech company fails its customers' basic safety and security, I'm going to start calling it an "Uber fail", because no matter how large Uber's current valuation is, it is no excuse for failing to provide customers upfront with basic safety and security.  Uber should stop rushing for scale (unprofitable as it is by most accounts) and start rushing for safety and security - THAT is the real advantage in today's Digital Trust world.

How long will it take Uber to get its act together, and how many times will it fail society before it does?  Or do regulators need to step in and protect people first?

Digital Trust must be a forethought and not an afterthought.


Greg Kovacic, CFA

Tagged with: uber
Cybersecurity & Safety / Preventing Online Fraud & Scams / Protection From Malware & Viruses
February 23, 2016

Every holiday has now become a chance for scammers to try and steal your personal data - credit card numbers, usernames and passwords - and even infect your computer and mobile devices with malware to extend their attack to the contacts in your address book.

One of the most common holiday scams is scammers presenting themselves as a legitimate business you are expecting to hear from, such as an online florist, e-card company or delivery company.  Scammers prey on people's curiosity.  Curious who sent you that e-card or package delivery email?  Odds are you will open it before really thinking through whether it is real or not.

The most common Valentine's Day scams have included:

(1) Fake e-cards: A red flag should be waving in your brain if you receive a harmless-sounding but vague subject line like, “Someone you know just sent you an e-card.”  Legitimate e-cards usually include the name of the person sending the e-card in the subject line.  Scammers also lower your suspicion by directing you to a website that mimics a popular greeting card site (e.g. Hallmark, American Greetings, or Paperless Post).  When you click on the link to open the card, you load malware onto your computer that opens the door to endless spam for you and your address book contacts.

(2) Fake package delivery: Scammers create phony delivery emails.  If you receive an email about a package you didn’t send or a delivery you don’t expect, don’t open it.  And never download a form or click through to a separate website.  The only package being delivered here is malware.

(3) Phishing florists: Phishing emails try to fool you into revealing credit card and other personal info.  A typical Valentine’s Day version is an email claiming to be from a florist, warning that your order cannot be delivered unless you log in and re-enter your credit card info.  This scam plays the odds: send it to enough people and it will eventually reach some who actually did order flowers and are worried they might not show up on time.

Only the paranoid survive in our world where Digital Trust comes second, if at all, on large tech companies' agendas.  Confirm it before you click it.  Protect yourself, because the large tech companies are not focused on solving these problems they have indirectly created for you.
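
That "confirm it before you click it" habit can even be partly automated. One classic phishing tell is a link whose visible text shows one domain while the underlying href points somewhere else; a small illustrative check (all sample values invented):

```python
from urllib.parse import urlparse

def domain(url: str) -> str:
    return urlparse(url).netloc.lower().removeprefix("www.")

def looks_like_phish(display_text: str, href: str) -> bool:
    """True when the link text claims a domain the href does not match."""
    if display_text.startswith("http"):
        shown = domain(display_text)
    else:
        shown = display_text.lower().removeprefix("www.")
    target = domain(href)
    return "." in shown and shown != target and not target.endswith("." + shown)

# Example: the text says hallmark.com but the link goes somewhere else.
assert looks_like_phish("www.hallmark.com", "http://evil.example.net/card")
assert not looks_like_phish("www.hallmark.com", "https://www.hallmark.com/ecards/123")
```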


Greg Kovacic, CFA

Tagged with: scam, valentine