Striking a Balance: Navigating the Ethical Landscape of AI in the Digital Age
In the groundbreaking documentary Citizenfour, Edward Snowden’s statement “Every border you cross, every purchase you make, every cell phone tower you pass, friend you keep, article you write, site you visit…is in the hands of a system whose reach is unlimited but whose safeguards are not” sparked a moment of realisation about the collection of our data and the state of our privacy. Since that mass awakening began in 2013, society has come to resemble the dystopias of 1984 and Brave New World, novels many of us were forced to read in our English curriculums and which have, ironically, turned out to mirror our world. Yet a decade later, the fervour of distrust towards data-driven technology seems to have waned: “Big Brother is Watching You” still rings in our consciousness, but it has become a quaint voice. Like a whisper, it is drowned out by the noise of ever-growing demand from IT, healthcare, manufacturing, and other major industries.
With the advent of numerous AI products in recent years, privacy concerns have become even more prevalent and urgent. One of the biggest tensions arises from a fundamental goal of AI development: improving the quality of AI platforms by exposing them to large amounts of data without impinging on personal information. Achieving this goal is easier said than done, since allowing AI products to amass online data almost inevitably risks the leakage of private data. This raises the question of whether our priority should lie in enhancing the quality of AI products or in protecting people’s personal data. A genuine coexistence between humankind and machines will only come about through transparent, sovereign data practices built by the collective work of society, corporations, and institutions. Because this technology carries potential for both good and harm, we will discuss the urgent actions that need to be taken at each level to steer away from a dystopian society.
How AI Serves Our Society
AI technology has brought many benefits to society across healthcare, manufacturing, and media. The success of companies like Alibaba, Coupang, and Amazon depends on their promise of fast shipping. To deliver on it, they gather large amounts of data compiling personal information such as customers’ tastes, previous purchase history, and location. It is reported that customers who experienced longer delivery times felt reduced satisfaction (Ma 2016), while personalised product recommendations increased user satisfaction (Patnaik 2023). This reflects how accustomed society has become to AI-driven efficiency in delivery and product curation in the commerce sector. Customers’ privacy and curation preferences, in turn, shape what information they are willing to hand over, and at what cost.
Additionally, healthcare involves a different, and even more personal, type of information. For instance, AI uses large amounts of data for genomic profiling. By profiling patients’ genomic data, scientists identified the medication ibrutinib as specifically targeting a certain subtype of lymphoma. Combining targeted therapy with genomic profiling, doctors and data scientists have worked together to make advancements in cancer research (Adasme 2020). A patient would have to expose part of their DNA, alongside other patients’ genetic code, but in return could see AI use that data to help find a medication targeting their specific lymphoma or other malignant tumours. On the strength of such advantages, society has, in a sense, trusted these technologies with its data.
However, to what extent will society extend its personal data to these technologies? Individuals will expose their information to some extent when the clear and intended use of a technology is known. A study conducted in Zurich revealed that participants were more willing to share their data depending on the perceived reliability of the business (Ackermann 2021). Society’s willingness to share data thus rests on a multitude of factors, ranging from the category of industry and the purpose of use to whether personally identifying information is involved.
Setbacks of AI
Alongside the benefits of this profound technology, many examples reveal the misuse of data-driven AI. Deepfake AI, for instance, is a type of artificial intelligence used to create convincing image, audio, and video hoaxes. Because it appropriates a person’s face or likeness captured from many different angles, it invades people’s privacy and can easily be abused: life-like fabrications can be used to spread false information, to blackmail, or to commit identity theft. Another leading problem is dark marketing, which targets specific individuals with personalised ads that are never visible to the general public, often using data analytics for precision. It frequently involves using people’s data without their consent, raising serious privacy concerns; implementing stricter regulations could help address this issue. Countless other misuses of AI extend to violating human rights, fabricating identities, monopolistic product targeting, and other harmful intentions. To correctly address the plethora of misuses that stem from AI technology, society, corporations, and government institutions have to work together more than ever.
Securing the Digital Future
There are multiple ways in which we can prepare for the next era of AI, and governments, corporations, and society must each adapt to the new world. Governments must do their utmost to detect and prevent abuses of AI, such as hacking to extract private information. A government sector should also be created to monitor AI activities and receive reports from citizens about misuse. Beyond this, the legal system should be reorganised to keep pace with the changing world: the current punishments for misusing AI are not strong enough to deter it, so a stricter legal framework must be established. In February 2024, Deputy Attorney General Lisa O. Monaco stated that “Department of Justice prosecutors can seek stiffer sentences for offenses made significantly more dangerous by the misuse of AI”, showing that the Department of Justice has already identified the danger of AI and announced harsher sentencing for its misuse. At the same time, as governments seek to control the misuse of AI, safeguards should be set to regulate governments themselves. In 2015, in the wake of Edward Snowden’s disclosures, the U.S. Congress passed the USA Freedom Act, which curtailed the bulk collection of Americans’ phone records. Other laws like the Privacy Act that nominally protect data should likewise be reinforced and strengthened.
Not only should governments strengthen and create laws to build trust, but corporations should also be transparent about what they do with data. Websites should clearly inform users whether these companies may collect their data, and any data obtained should be maintained securely. Laws have already been set in place to strengthen corporate transparency: in 2023, the EU AI Act set out a regulation that AI technologies like ChatGPT must “disclose the contents generated and prevent the model from generating illegal contents”, which reveals the restrictions that AI companies will face. Furthermore, corporations should utilise various encryption techniques to secure the data they hold.
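To make that last point concrete, one common safeguard is to pseudonymise personal identifiers before they ever reach an analytics pipeline. The Python sketch below uses a keyed hash (HMAC-SHA256) for this; the key, names, and data are hypothetical illustrations of the technique, not any particular company’s practice.

```python
import hmac
import hashlib

# Hypothetical secret key; in practice this would come from a key
# management service, never from source code.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymise(identifier: str) -> str:
    """Replace a personal identifier (e.g. an email address) with a
    keyed hash, so analytics can still link records per customer
    without ever storing the raw value."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()

# The same input always maps to the same token, so purchase histories
# can still be joined per customer...
assert pseudonymise("alice@example.com") == pseudonymise("alice@example.com")

# ...but without the key, the token reveals nothing about the address.
assert "alice" not in pseudonymise("alice@example.com")
```

Unlike plain hashing, the secret key prevents an attacker from simply hashing guessed email addresses and matching the results.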
Thirdly, society should be prepared to adapt to the new and changing system. As everything changes, including jobs, daily life, and education, it will become harder to distinguish artificial works from man-made works. Users should be educated about current AI technologies to protect themselves, and educational, workplace, and governmental programs should thoroughly provide resources that teach the proper usage and handling of AI technologies. Just as governments and corporations should face regulations, citizens have a responsibility to be aware of and cautious with their data.
What should we weigh more? Privacy? Development?
Undoubtedly, AI models depend on the amount and quality of data to deliver salient results. Consequently, their future will hinge on integrating privacy protection into their design. In the end, a balance needs to be struck between obtaining data and protecting sensitive information. Personally, because AI plays such a prominent role in our society, I think it is inevitable that online services and products will collect large datasets, including personal information, to train and improve AI algorithms. However, we will need to establish safeguards to “improve the acquisition, management, and use of data” (Gravrock 2022) and to ensure that systems only use “data with clear and informed user consent.” Ultimately, our end goal should be to develop AI models in a manner that does not infringe on people’s fundamental right to privacy, through the combined work of society, government institutions, and corporations.
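As a minimal sketch of what “data with clear and informed user consent” could mean in practice, the hypothetical Python snippet below excludes every record from a training set unless its owner has explicitly opted in. The record fields and function names are illustrative assumptions, not a real system’s API.

```python
from dataclasses import dataclass, field

@dataclass
class UserRecord:
    """One user's data, with an explicit opt-in flag (hypothetical schema)."""
    user_id: str
    purchase_history: list = field(default_factory=list)
    consented_to_training: bool = False  # opt-out is the default

def training_dataset(records):
    """Return only records whose owners gave informed consent;
    everything else is excluded from model training by default."""
    return [r for r in records if r.consented_to_training]

records = [
    UserRecord("u1", ["book"], consented_to_training=True),
    UserRecord("u2", ["phone"]),  # never opted in, so never trained on
]
usable = training_dataset(records)
assert [r.user_id for r in usable] == ["u1"]
```

The design choice worth noting is the default: consent is false unless explicitly granted, so a missing or forgotten flag errs on the side of privacy.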
BIBLIOGRAPHY
Adasme, Melissa F., et al. “Structure-Based Drug Repositioning Explains Ibrutinib as VEGFR2 Inhibitor.” PLoS ONE, vol. 15, no. 5, 27 May 2020, e0233089, doi:10.1371/journal.pone.0233089.
Patnaik, Priyadarsini, Parameswar Nayak, and Siddharth Misra. “Personalized Product Recommendation and User Satisfaction: Reference to Industry 5.0.” 2023, doi:10.4018/978-1-7998-8805-5.ch006.
Ma, Siqi. “Fast or Free Shipping Options in Online and Omni-Channel Retail? The Mediating Role of Uncertainty on Satisfaction and Purchase Intentions.” The International Journal of Logistics Management, vol. 28, 2016, doi:10.1108/IJLM-05-2016-0130.
Ackermann, Kurt, Linda Miesler, Thoralf Mildenberger, Martin Frey, and Angela Bearth. “Willingness to Share Data: Contextual Determinants of Consumers’ Decisions to Share Private Data with Companies.” Journal of Consumer Behaviour, vol. 21, 2021, doi:10.1002/cb.2012.
“Deputy Attorney General Lisa O. Monaco Delivers Remarks at the University of Oxford on the Promise and Peril of AI.” United States Department of Justice, Office of Public Affairs, 14 Feb. 2024, www.justice.gov/opa/speech/deputy-attorney-general-lisa-o-monaco-delivers-remarks-university-oxford-promise-and.
“EU AI Act: First Regulation on Artificial Intelligence.” European Parliament, 8 June 2023, www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence. Accessed 8 May 2024.
Gravrock, Einaras von. “Artificial Intelligence Design Must Prioritize Data Privacy.” World Economic Forum, 31 Mar. 2022, www.weforum.org/agenda/2022/03/designing-artificial-intelligence-for-privacy/. Accessed 9 May 2024.