Everything Is Not Terminator: Public-Facing Artificial Intelligence Policies – Part 1

By John Weaver

For some time now—in response to the California Online Privacy Protection Act, Canada’s Personal Information Protection and Electronic Documents Act, and similar statutes and regulations from other jurisdictions—any company with any web presence to speak of has provided a public-facing privacy policy on its website, explaining what it does with each user’s information, how it complies with the relevant laws, what rights users have to access their information, etc. These policies have become much more prominent in 2018, as the EU’s General Data Protection Regulation became effective and thousands of companies notified their contact lists that their privacy policies had been updated.

Although artificial intelligence is not nearly as well regulated as data privacy and is, in fact, hardly regulated at all, there are some requirements, expectations, and norms that are emerging from a combination of expert opinion, pending legislation, and the limited black-letter law. In response, we have begun advising clients about AI policies. These are public-facing policies that state certain information about how companies use AI in their business operations.

Read more here.

Does Your Targeted Advertising Violate the GDPR?

By John Weaver

Targeted advertising has emerged as one of the most important marketing tools of the last decade. It relies on data analysis of users or viewers to help the advertising company identify the most receptive audience and show advertising only to that demographic. In early forms, the advertising company selected a program or publication that was known to be popular among the desired audience: sports car ads went in Popular Mechanics to appeal to adult males; family wagon ads ran during Family Ties to appeal to adults with children; etc. This still exists, but online activity and mobile device usage have vastly improved the ability of marketers to identify and target key audiences in those mediums. Instead of appealing to broad demographics like adult males or adults with children, marketers can identify much more specific audiences: college-educated women between the ages of 39 and 50 with incomes between $50,000 and $100,000; males between the ages of 14 and 18 who have recently read The Ringer; etc.

Continue reading

Target Reaches Settlement Agreement with 47 States for Data Privacy Breach

By John Weaver

Target has agreed to pay $18.5 million to settle a lawsuit involving 47 states and the District of Columbia related to a 2013 cyberattack that affected the data privacy of more than 41 million customers. The hackers gained access to Target’s customer service database, capturing full names, phone numbers, email addresses, payment card numbers, credit card verification codes, and other sensitive data from those customers.

Continue reading

Artificial Intelligence Owes You an Explanation

When an A.I. does something, you should be able to ask, “Why?”

By John Weaver

As published on Slate.com (May 2017)

My family has grown very attached to our Amazon Echo, particularly for music. We can access Prime Music by asking Alexa for an artist, song, or station. Even my young kids can navigate the verbal interface to request “Can’t Stop the Feeling” from the movie Trolls or the soundtrack to the musical Hamilton.

As part of the smart speaker’s artificial intelligence, the program picks up on our tastes and preferences, so when we simply say “Alexa, play,” the device will queue up suggested tracks. In theory, what it picks should have some obvious relationship to music we chose ourselves. And songs it selects usually do. Usually.

But recently, Alexa considered our diet of kids’ music, show tunes, the Beatles, the Rat Pack, and Pink Martini, and decided to cue up … Sir Mix-a-Lot.

To read the full article on Slate.com, click here.

BYOD: Has Your Company Addressed Its Privacy and Data Security Risks?

By Cameron G. Shilling

(co-authored by: Colleen Karpinsky Cone, VP Talent & Culture, DYN)

As published in ACC Docket (October 2015)

Bring your own device, or BYOD, presents significant privacy and data security risks to companies. To reduce these risks, businesses should implement appropriate written data security and information use policies and procedures, before a disaster occurs.

BYOD appeals to both companies and their employees. Employees prefer to select the type of mobile device they want to use for business and personal purposes. Companies use BYOD to avoid some or all of the costs of purchasing and supporting mobile devices for employees, and to simplify the processes when hiring employees and when employees depart.

When an employee uses a personal device to perform work and access the data systems of the company, valuable business information accumulates on the device. The presence of that data on the device is a security risk if the device is lost, stolen, or compromised, and privacy concerns can arise if the company needs to access the device to recover its data. These issues should be properly addressed in written data security and information use policies.

Several state and federal laws require companies to implement security measures to safeguard sensitive information. The Massachusetts and California data security laws and the Health Insurance Portability and Accountability Act, or HIPAA, are good examples. These laws require encryption of ‘data in motion’, such as data transported on mobile devices, laptops, and USB drives, and data transmitted electronically by email and in other ways. BYOD companies often do not encrypt data in motion on employee-owned mobile devices, and devastating data breaches have resulted from the loss, theft, and compromise of such devices.

Mobile device management, or MDM, is currently a good technology solution for encryption of business data on personal devices. Not only is MDM generally commercially available and technologically viable, it also provides companies with other benefits, such as the ability to monitor an employee’s remote business activities and to remotely erase company data from lost and stolen devices and from the devices of departing employees.

Encryption technology also is readily available for laptops and business email systems; dual authentication virtual private networks, or VPNs, provide employees with encrypted access to company systems from offsite; and secure portals and similar technologies can be implemented for the encrypted transmission of large amounts of data. In short, encrypting sensitive data on personal mobile devices and during data transmission, like email, is no longer optional under data security laws.

Privacy concerns with personal devices present equally serious issues. The company data that accumulates on such devices mixes with personal data of the employee. Because the employee owns the device, the company does not have unfettered access to its data on the device, particularly when an employee is disgruntled or has departed, and even cooperative employees can have legitimate concerns about handing over their personal devices to corporate officials. Also, the company has little (if any) control over apps that employees download to their personal mobile devices, and malicious apps pose threats to company data on the device and can provide access through the device to company servers and other data stores.

In addition to these difficult personnel issues, recovering business data from a personal mobile device can be a legal minefield. An unauthorized interception of an electronic communication, such as an email or text sent to a personal email account or cellphone number connected to the device, can violate the federal Electronic Communications Privacy Act. Likewise, unauthorized access to stored electronic communications, such as an employee’s Facebook, LinkedIn, or other social media account, can violate the Stored Communications Act.

Beyond those two federal statutes, an employee also may assert a common law claim that the company’s intrusion into the employee’s personal device violates the employee’s legitimate expectation of privacy. In 2014, the U.S. Supreme Court recognized in Riley v. California that, as a society, we have developed a strong sense that the data on our personal mobile devices is private. The Court explained its reasoning as follows:

Modern cell phones are not just another technological convenience. With all they contain and all they may reveal, they hold for many Americans ‘the privacies of life’ …. The fact that technology now allows an individual to carry such information in his hand does not make the information any less worthy of the protection for which the Founders fought.

Sound information use policies and technology practices are the best solutions to avoid data privacy problems. A company should clearly notify its employees in its information use policy that the company owns its business data, and that employees cannot have any expectation of privacy with respect to their possession or use of it. A company also should notify its employees that the company has a right to access employee-owned devices to recover business data, and the company should establish parameters in its policy for doing so. And, company IT personnel need to be properly trained to avoid intentional and inadvertent violations of the federal statutes mentioned above when accessing personal devices.

BYOD is not likely to subside – if anything, its prevalence will increase. Companies that foster this practice should address the privacy and data security concerns of BYOD, by implementing appropriate written data security and information use policies and by adopting sound technology practices, like MDM and encryption.

What Does It Really Take To Be Data Security Compliant?

By Cameron G. Shilling

As published in NH Bar News (12/20/2016)

Most businesses know (or should know by now) that they must comply with state and federal data security laws and regulations. But business leaders often are unaware of what it really takes to do so. That is understandable. Data security seems complex, and technology consultants and vendors rarely try to demystify it for their customers.

Data security is just like any other legal or business risk management issue. The risk is managed through a process of collaboration between business leaders, information technology professionals, and qualified legal counsel. The process involves the following steps:

  1. Perform a risk assessment of the business’ physical, technological and administrative systems using the requirements and standards of applicable laws.
  2. Generate a report that identifies areas of non-compliance and risk, including a prioritization and chronological plan for remediation.
  3. Remediate vulnerabilities that can feasibly and financially be fixed within a reasonable amount of time.
  4. Create a written data security plan tailored to the procedures of the business.
  5. Train employees about data security compliance generally and the business’ procedures under the written data security plan.
  6. Perform periodic reassessments, including sub-assessments if new or different physical, technological or administrative systems are adopted.

Step 1 – the risk assessment – involves identifying the information a business has that is legally protected, for example, under state data security laws or under federal laws or regulations such as HIPAA, the Gramm-Leach-Bliley Act, or SEC or FCC regulations. The information is then mapped through its lifecycle (e.g., from receipt and creation, through use and transmission, to disposal and destruction), and areas of non-compliance or risk are identified using the legal requirements and standards of applicable laws and regulations.

This is a highly collaborative process between the leaders of the business, competent IT professionals (inside or outside the business, or both), and legal counsel experienced with this area of the law and qualified to understand technological and physical security matters.

Step 2 – the report – flows naturally from the areas of non-compliance and risk identified in the assessment. Priority is assigned to items that are relatively easy to remedy, that do not comply with applicable law, or that entail significant risk, and a timeline is created for addressing the issues.

Step 3 – the remediation – is the process of identifying and implementing solutions to the vulnerabilities identified during the assessment and in the report. Remediating vulnerabilities often depends on the availability of technological or physical systems, and budgetary constraints of the business. It is common for a business to need 12-18 months to properly address all of the vulnerabilities identified in an initial data security risk assessment.

Step 4 – the written plan – is a policy created from the information gathered during the risk assessment and the remedies implemented or anticipated for the vulnerabilities. A plan created in the absence of a comprehensive risk assessment is a pure shot in the dark, and does not comply with state or federal law or accepted practice. No two data security plans are the same because no two businesses are the same, and there is no competent boilerplate form.

Step 5 – the training – is an integral component of data security compliance. Employees handle protected data on a daily basis, and thus need to be taught about data security generally as well as the business’ specific procedures as set out in the written plan. Likewise, properly trained employees know better how to avoid breaches, how to recognize an actual or potential breach, and how to properly respond in such circumstances.

Step 6 – the reassessment – is required and natural for any business committed to data security. Reassessments are used to address vulnerabilities arising from new or different technological, physical or administrative systems, or from external threats. Also, as a business becomes data security aware, it frequently identifies previously unknown vulnerabilities and adopts remedies that enhance security beyond the measures implemented after the initial risk assessment and report.

Data security is not something that can or should be overlooked simply because a business does not understand how to become compliant. Just like any other risk management issue, security is accomplished through business leaders, IT professionals, and qualified counsel working collaboratively through an established process under applicable law.

Know the Law: Who is Liable for Data Breach?

By Ramey D. Sylvester

As published in the Union Leader (12/19/2016)

Q. My company handles a lot of sensitive customer information (medical, financial, biographical) and has relationships with third party service providers that have access to the information. Can my company be held liable to our customers for my service provider’s mishandling of that data?

A.  Bad news first. Not only may your company be liable to your customers, it may also have to engage in costly notification and disclosure efforts and be subject to governmental auditing and penalties, all due to your service provider’s mishandling of your customers’ sensitive information.

In today’s computer and cloud-based business world, customer data can be accessed, and is often stored, by a company’s service provider or “vendor.” Vendors providing services such as software as a service (SaaS), payment processing, accounting, document destruction, and external IT all commonly have access to, and store, sensitive information of their clients’ customers. Even your office supply delivery company, cleaning service, and building maintenance company have access to your customer information and could cause a breach, either knowingly or accidentally.

Depending on the privacy laws and regulatory requirements your company is subject to, you may be required to ensure that vendors are equipped to properly secure your sensitive customer data. Regardless, your company will be responsible for your vendors’ failure to maintain the confidentiality of your customer data and for choosing to work with a vendor that is not data security compliant. Should your vendor suffer a data breach, your company will be on the hook for customer notification requirements, governmental investigations, and penalties, in addition to any customer legal action.

So what can you do to minimize these risks? Establish a vendor management program to assess your vendors’ ability to handle sensitive customer data. If the vendor will be handling sensitive customer data, make sure that the vendor has a data security policy and data breach response plan. Further, require the vendor to have cyber insurance policies that will cover the costs of data breaches, and have the vendor sign a data security agreement that will require it to maintain the confidentiality of the customer data, require it to indemnify your company for unauthorized disclosures of customer data, and establish auditing rights that will enable your company to ensure that the vendor is maintaining its data security standards.

The bottom line is that since your company will be responsible for the mistakes of your vendors, you should take appropriate legal steps to protect your company and your customers.

Know the Law: Some Data Collecting Requires Disclosures

By Kevin Lin

As published in the Union Leader (12/5/2016)

Q. My website allows customers to create user accounts, saves their contact information and tracks their purchases to suggest new items they may want to buy. Are there any disclosures I need to make about my customer data collection?

A.  One regulation governing consumer data collection is CalOPPA, a California statute that seeks to improve the transparency of a company’s data privacy practices. A New Hampshire business is subject to CalOPPA if it gathers personal information online about any California resident.

This information includes first and last names, addresses, emails, telephone numbers, and other similar information. Since most online footprints are nationwide and it is often difficult to differentiate California residents from other customers, businesses should simply comply with CalOPPA to avoid unknowing violations.

CalOPPA requires that a business post its privacy policy on its website identifying exactly what consumer information is collected and with whom that information is shared. The law also requires that the privacy policy inform consumers about the process for reviewing and requesting changes to any information collected, and that it specify how consumers will be notified of changes to the policy. Additionally, the most recent amendments require the privacy policy to detail how the business will respond to web browser “do not track” signals.

Violations of CalOPPA are enforced through California’s Unfair Competition Law. A company that does not comply with CalOPPA may be subject to penalties of up to $2,500 for each violation. With respect to mobile applications, the penalty is assessed each time the application is downloaded by a California resident.

In 2012, the California Attorney General informed hundreds of noncomplying companies (including those outside of California) that they would be fined if they did not bring their mobile applications into compliance. More recently, California Attorney General Kamala D. Harris released a new tool for consumers to report noncomplying websites, mobile applications and online services.

Given the rise in enforcement and the potential risk of exposure, it is crucial that all New Hampshire companies review their privacy policies to ensure compliance with CalOPPA.

Know the Law: Who is Liable for Chip-Based Credit Card Fraud?

By Cameron G. Shilling (originally published 11/23/2015)

As published in the Union Leader (9/14/2015)

Q.  More and more of my customers are paying with credit cards that have chips in them.  Do I need a chip-based credit card reader?

A.  Credit card companies – not retailers or consumers – have historically absorbed the liability for fraudulent credit card transactions.  That will change on October 1, 2015.  If your business does not use EMV-equipped card readers to process credit cards that utilize the new chip technology, then your business – not the credit card company – will be liable for fraudulent transactions.

The credit card industry in the United States has been transitioning for the last several years to cards that utilize embedded chips, in addition to the older magnetic stripe technology.  The reason is that the vast majority of credit card fraud occurs from the “skimming” of numbers when a card’s magnetic stripe is “swiped” through a card reader.  Target, Home Depot, and TJX are just a few examples of recent breaches affecting hundreds of millions of consumers.

Retailers outside of the United States began transitioning many years ago to chip technology, which is called “EMV.” Outside of this country, about 70% of all credit card readers employ EMV technology, compared to the relatively negligible adoption of EMV domestically.  As a result, the approximately $10 billion of annual domestic credit card fraud accounts for nearly half of global fraudulent credit card transactions, even though only about one quarter of all credit card transactions worldwide occur in the United States.

On October 1, 2015, there will be a change to the rules that major credit card companies apply to retailers and other credit card processors.  If fraudulent transactions occur using cards with chips, and the retailers/processors did not use EMV-equipped card readers, then the retailers/processors – not the credit card companies – are liable for the fraudulent transactions.  By contrast, if a retailer/processor uses an EMV reader to process a chip-equipped card, the credit card company is liable.  Also, credit card companies remain liable for fraudulent transactions using credit cards equipped only with a magnetic stripe and not the chip technology.

Because about 40% of credit cards in the United States presently have embedded chips, domestic retailers and credit card processors face significant potential liability for fraudulent transactions.  As a result, if your business processes credit card transactions, you should promptly convert to EMV-enabled credit card readers.