The increasing encroachment of audio-activated devices into our personal lives presents unique privacy challenges. Recent events have prompted review of how these devices collect, store, transmit, and use data, raising concern and highlighting a number of long-term lessons to be learned.
Many concerns arise from the use of the “wake word” or phrase used to initiate interaction with the device. The cases which most starkly highlight the propensity of these devices to record personal interactions are those involving police warrants. Indeed, in 2018 a state judge ordered Amazon to turn over all recordings from an Echo device present at the murder scene of Christine Sullivan and Jenna Pellegrini. Privacy International, however, notes that such practices have been occurring since late 2015.
These matters are further magnified by the recent confirmation that contractors are reviewing Echo recordings, an issue highlighted by Bloomberg in April 2019. Amazon stated that any audio access was for the purpose of improving and developing the audio feedback system. In practice, the Echo device logs voice interactions and sends them to Amazon’s own cloud, where contractors transcribe and annotate them and feed the results back into the software to build a better library of voice commands. It should be noted that such data harvesting practices are commonplace, typically implemented to assist software development and machine learning.
These processes are high volume, with a view to training the Alexa algorithms and improving the accuracy of the voice recognition process. Think of how some phone calls are recorded for training purposes, except that here the notification is buried in the terms and conditions and the practice is potentially far more invasive. This in itself raises a number of issues, many of which are exacerbated by the insertion of a human element.
In response, Amazon issued a statement in mitigation, highlighting that it handled the information sensitively and removed any personally identifiable information.
Amazon’s FAQs – shedding light on usage
Unsurprisingly, Amazon clarified the transfer and retention of voice recordings, stating in its Alexa FAQs:
“3. Is Alexa recording all my conversations?
No. Echo devices are designed to detect only your chosen wake word (Alexa, Amazon, Computer or Echo). The device detects the wake word by identifying acoustic patterns that match the wake word. No audio is stored or sent to the cloud unless the device detects the wake word (or Alexa is activated by pressing a button).”
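The gating Amazon describes – nothing is sent to the cloud until the wake word is heard – can be sketched in a few lines. The following is an illustrative Python sketch with invented names and a toy text-based matcher, not Amazon’s actual on-device code:

```python
# Toy sketch of on-device wake-word gating: each audio frame is inspected
# locally, and frames are only streamed to the cloud between the wake word
# and the end of the request. Names and frame format are hypothetical.
WAKE_WORDS = {"alexa", "amazon", "computer", "echo"}

def matches_wake_word(frame: str) -> bool:
    # Stand-in for the on-device acoustic pattern matcher.
    return frame.lower() in WAKE_WORDS

class Device:
    def __init__(self):
        self.listening = False  # True only after the wake word is detected
        self.uploaded = []      # frames that actually reach the cloud

    def process(self, frame: str) -> None:
        if not self.listening:
            if matches_wake_word(frame):
                self.listening = True   # wake word heard: start streaming
        elif frame == "<silence>":
            self.listening = False      # request over: stop streaming
        else:
            self.uploaded.append(frame)

device = Device()
for frame in ["private chat", "Alexa", "what time is it", "<silence>", "more private chat"]:
    device.process(frame)
# Only the request between the wake word and the silence is uploaded;
# the surrounding conversation never leaves the device.
```

The design point the FAQ is making is visible in the sketch: the matcher runs locally, so the decision about what leaves the device is made before any transmission occurs.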
Alexa users can also access the logs Alexa stores of their own device’s recordings and delete them at any time. The pertinence of this becomes clear in light of the following:
“5. How are my voice recordings used?
Alexa uses your voice recordings and other information, including from third-party services, to answer your questions, fulfill your requests, and improve your experience and our services. We associate your requests with your Amazon account to allow you to review your voice recordings, access other Amazon services… and to provide you with a more personalized experience.”
Furthermore, the nature of the review process itself is highlighted and explained:
“6. How do my voice recordings improve Alexa?
Training Alexa with real world requests from a diverse range of customers is necessary for Alexa to respond properly to the variation in our customers’ speech patterns, dialects, accents, and vocabulary and the acoustic environments where customers use Alexa. This training relies in part on supervised machine learning, an industry-standard practice where humans review an extremely small sample of requests to help Alexa understand the correct interpretation of a request and provide the appropriate response in the future.”
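The supervised review pipeline the FAQ describes – an “extremely small sample” of requests routed to human annotators, whose labels feed back into training – might look something like the sketch below. The sampling fraction, seed and field names are invented for illustration, not Amazon’s actual pipeline:

```python
import random

def sample_for_review(requests, fraction=0.001, seed=42):
    # Route a very small random sample of requests to human reviewers.
    rng = random.Random(seed)
    return [r for r in requests if rng.random() < fraction]

def annotate(request):
    # Stand-in for a contractor transcribing and labelling one clip.
    return {"audio": request, "transcript": request.lower(), "intent": "unknown"}

def build_training_batch(requests):
    # Annotated clips become supervised training data for the model.
    return [annotate(r) for r in sample_for_review(requests)]

requests = [f"Request {i}" for i in range(10_000)]
batch = build_training_batch(requests)
```

The privacy-relevant property is that the human element only ever touches the sampled fraction; the controversy arose because users were not clearly told that any fraction was reviewed by people at all.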
The extent of Amazon’s voice modelling system is highlighted by the creation of voice profiles, which the software builds in the form of acoustic models. These allow Alexa to identify users by voice, akin to a vocal fingerprint.
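A “vocal fingerprint” of this kind is commonly implemented by comparing a voice embedding against each stored profile and accepting the closest match above a similarity threshold. The vectors, names and threshold below are invented for illustration; real acoustic models are far higher-dimensional:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical stored acoustic profiles, one embedding per household member.
PROFILES = {
    "parent": [0.9, 0.1, 0.2],
    "child":  [0.1, 0.8, 0.3],
}

def identify(embedding, threshold=0.8):
    # Return the best-matching profile, or None if no profile is close enough.
    best = max(PROFILES, key=lambda user: cosine(embedding, PROFILES[user]))
    return best if cosine(embedding, PROFILES[best]) >= threshold else None
```

A voice close to a stored profile is recognised; an unfamiliar voice falls below the threshold and is treated as unknown – which is precisely why such profiles function as biometric identifiers.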
Concerns and misuse
Alexa’s data harvesting and retention processes are clearly broad and largely autonomous. Concerns have previously been raised about the recording of background conversations. In many cases these result from glitches in the system, inadvertently spoken wake words, or errors in the voice recognition system.
However, the privacy implications of the invasive nature of the recording and review of messages cannot be ignored. This is primarily because interactions with Alexa take place in environments where an individual has a reasonable expectation of privacy – their own home or vehicle, for example.
These are the areas which most often attract the privacy rights enshrined in Article 8 of the European Convention on Human Rights. And for good reason. Personal autonomy and dignity are key principles which allow individuals the enjoyment of these spaces. The interference of a third-party corporation, typically uninvited and unanticipated, in such interactions seems disturbingly close to invading the most private of personal spaces. In doing so, it undermines the societal norms which previously determined what constitutes reasonable respect for an individual’s private life.
Whilst such invasive recording and third-party sharing is the exception rather than the norm, developments in technology and its growing influence in our lives may conceivably shift which scenarios attract a reasonable expectation of privacy. Such an expansion may be symptomatic of the incremental encroachment of technology into our private lives. For example, should Alexa record a private conversation between a married couple and send it to the Amazon cloud for review, could spousal privilege, data protection, breach of confidence or Article 8 rights be asserted successfully?
This invasive recording and relay is typically tempered by four factors: consent, opt-outs, limitations, and security.
- Consent: Individuals, in purchasing an Amazon Echo and using the Alexa function, consent to the recording and processing of their interactions. Amazon Echo’s terms and conditions cover this and clearly provide a legitimate basis for the processing of such data.
- Opt-outs: Customers can opt out of any data processing, including the use of their voice commands to improve Alexa’s voice recognition software. This is made clear and unconditional. Better still, opting in to the sending of voice data should be the norm.
- Limitations: The use of the voice prompts given to Alexa is limited to the specified purpose of improving the software, and data is retained only for that purpose. There are further limitations on how long the data is stored and who has access to it, to ensure interference is as limited as possible.
- Security: Being stored on Amazon’s cloud renders the data vulnerable in unique ways. Extra care must therefore be taken to anonymize, encrypt and otherwise protect the data provided.
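The anonymization step in the last factor can be illustrated as simple pseudonymization: direct identifiers are stripped and replaced with a random token before a clip is stored or shown to reviewers. The field names are hypothetical, and keeping any re-identification mapping separately under stricter access controls is an assumption about good practice, not a description of Amazon’s design:

```python
import secrets

# Hypothetical fields that directly identify a person or household device.
DIRECT_IDENTIFIERS = {"account_id", "name", "device_serial"}

def pseudonymize(record: dict) -> dict:
    # Strip direct identifiers and attach a random token in their place,
    # so reviewers see the audio but not who it belongs to.
    safe = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    safe["pseudonym"] = secrets.token_hex(8)  # 16 hex characters
    return safe

clip = {"account_id": "A1B2C3", "name": "J. Doe", "audio": "weather request"}
safe_clip = pseudonymize(clip)
```

Note that pseudonymization is weaker than true anonymization: as the voice-profile discussion above shows, the recording itself can still act as a biometric identifier.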
We have seen competitors Apple and Google implement these frameworks very successfully, particularly processes which treat voice recordings sensitively. Given the recent controversy, Amazon is now doing the same, matching its competitors on this hot-topic issue.
What is clear is that this is an instance of societal norms shifting to invade what was otherwise a private space. Whether this will affect which scenarios attract a reasonable expectation of privacy has yet to be resolved, but it is a pressing and compelling issue. Where an individual allows into their home a device which cedes their privacy in a very specific way, the maintenance of their privacy rights must be considered – ideally by the courts, for the sake of clarity. This conceptual scenario is ripe for a test case.
In an illustration of the live nature of this issue, Apple has recently issued a statement on improving the privacy protections of its own virtual assistant, Siri. It shows just how far along Apple is: voice clips are associated with a random identifier, that association is severed after six months, and data use is minimized.