Do Your Artificial Intelligence Practices Pass the Turing Test?

Is it just us, or have the past few weeks felt very sci-fi? Our newsfeed has been flooded with stories about artificial intelligence, the fact that Tom Cruise apparently doesn't age, and, of course, Stranger Things Season 4 Volume 2 dropping next week. In this edition we discuss how artificial intelligence can be both a blessing and (Vecna's) curse, and what companies should know to avoid getting into hot water. Plus, some cannabis-related news!

Me, Myself, and I, Robot

Google made headlines earlier this week when it suspended one of its engineers for claiming that Google's LaMDA artificial intelligence chatbot had achieved sentience. (No judgment if you have to Google that word . . .). While the various news articles on this story center on whether we are indeed about to welcome our robot overlords, it piqued our interest for different, yet equally nerdy, reasons. Google reportedly suspended the engineer for violating its "confidentiality" policy, which we assume is pretty extensive. According to the engineer, he was just trying to inform the general public about how "irresponsible" Google is being with this technology. To us, that sounds a lot like "whistleblowing." If Google takes the next step and fires the engineer, we wouldn't be surprised if he hits back with a retaliation claim.

We now take this opportunity to inform you of the dangers of firing a suspected whistleblower. Generally, for an employee to be a "whistleblower," they must allege that the employer has engaged in some sort of illegal act. In those cases, the employer usually can't fire the employee for "blowing the whistle." Google is an interesting case study because the engineer isn't exposing any alleged legal violation – just the potential dangers of sentient artificial intelligence. And Google's position will likely be that this was nothing more than an improper disclosure of confidential information, which is not exactly legally protected. We will definitely be following this to see what happens next.

Should Go Without Saying – But Make Sure Your Robots Aren’t Discriminating

Machine learning has come a long way, and although we haven't quite reached the Singularity, many employers rely heavily on machine learning, data analytics, and similar technologies to weed out candidates. Local agencies and legislatures have been hard at work making sure these machines aren't accidentally (or intentionally) programmed to be racist, sexist, or ageist. Indeed, effective January 1, 2023, employers in New York City will no longer be able to use automated tools to screen candidates unless those tools have undergone an independent bias audit within the year before their use. Employers who use automated screening tools must also update their websites to include a public notice with the results of the most recent audit and the tool's distribution date.

Employers must also: (1) provide candidates with advance notice that an automated screening tool will be used; (2) disclose the factors the tool will consider; and (3) allow candidates to request an alternative selection process. The law further requires transparency about the data the automated process collects, the source of that data, and the applicable retention policy. Violations carry fines ranging from $500 to $1,500 each.

We’re not done yet . . .

The EEOC recently chimed in on this issue and released guidance on employers' use of artificial intelligence to screen job applicants, warning that an employer's use of "algorithmic decision-making tools" could violate the Americans with Disabilities Act. How, you ask? By, among other things, failing to provide candidates with reasonable accommodations to complete the evaluation process in the first place (e.g., additional time), or by programming the software to screen out people who don't meet certain objective criteria because of their disability (e.g., a significant gap in employment caused by needing to undergo medical treatment). Another blatant example from the EEOC: an algorithm unfairly scoring an individual's problem-solving ability because of a speech impediment. The point is, robots are not humans and can't possibly be programmed to account for and accommodate every disability, so be careful not to rely too heavily on the technology.

New Jersey Marijuana Dispensaries Are Opening Up – Employers Take Notice!

If all this science fiction and artificial intelligence talk is making your head spin and stressing you out, take heart: over a dozen dispensaries have opened in New Jersey for the legal sale of recreational marijuana, in locations including Elizabeth, Paterson, Bloomfield, and Maplewood. As the cannabis industry continues to grow, it's a good time to revisit your drug and alcohol policies – not to mention your other employment and general business compliance policies. And if you're in an industry where employee and client safety are primary concerns, consider adopting a policy addressing on-the-job impairment.

We hope you enjoyed this edition. And although we don't have computer brains just yet, as always: if you've got questions, you know we've got answers.

 
