Mind and Machine: Understanding AI's History, Functionality, and Ethical Implications
Introduction
In the annals of human history, certain technological advancements have emerged as catalysts of profound change, reshaping the very fabric of society and altering the course of civilization. The Industrial Revolution, a seismic event that propelled humanity into an era of mechanization and mass production, stands as a testament to the remarkable capabilities of human ingenuity. However, like all transformative forces, this epochal shift was met with resistance and apprehension from those unprepared for the winds of change.
Fast forward to the present day, and the world finds itself on the precipice of yet another revolution, one that has the potential to rival the Industrial Revolution in scope and impact – the era of Artificial Intelligence (AI). While AI is heralding unprecedented opportunities and solutions, it is also treading a path analogous to its industrial predecessor, with humanity once again grappling with apprehension and resistance.
From Ayn Rand's objectivism to Immanuel Kant's Categorical Imperative, AI can unravel complex philosophical ideas in no time, democratizing knowledge acquisition for all. In this blog, my main aim is to explore this enormous change from multiple angles. First, we will trace the captivating history behind AI, uncover its intricate workings, and see how AI tools seamlessly integrate into our daily lives. But that's not all – we will also confront the moral and ethical dimensions of this revolutionary technology. From industries to personal lives, AI is reshaping everything. Welcome to the forefront of innovation, where the AI revolution is in full swing!
History of AI
The concept of AI has an early origin: the American scholar John McCarthy first proposed it in 1956. However, the poster boy of Artificial Intelligence has always been Alan Turing. The British polymath is famously regarded as the father of Artificial Intelligence. Turing posed questions about how to make computers think as humans do. He published several research papers on his ideas and helped develop the Bombe, an electromechanical machine that deciphered secret codes produced by ENIGMA, the cipher machine used by the Nazi armies during World War II. Unfortunately, Turing could not further actualize his interests, for two major reasons.
First, before 1949, computers could not store commands; they were used only for processing a given set of inputs. Second, the processing capacity of these primitive machines was minimal, and only prestigious universities and multinational companies could afford the extravagant processing and storage requirements of Turing's dream. In 1956, the Dartmouth Conference marked the field's formal birth. From then until the 1960s, researchers developed the first AI programs, such as Logic Theorist and the General Problem Solver (GPS), which demonstrated the ability to solve certain logical and mathematical problems. This early progress then stalled for almost 20 years because the technology failed to deliver the expected throughput, and AI saw only sparse use in fields such as finance and healthcare.
Interest in Artificial Intelligence was not rejuvenated until the early 1990s, when the use of machine learning algorithms further cemented the popularity of this revolutionary technology. The 21st century has brought significant progress in AI, especially in natural language processing, computer vision, and robotics.
Machine learning techniques such as deep learning and reinforcement learning have made breakthroughs in diverse applications, including image recognition, speech synthesis, and playing complex games such as Go and chess. Recent advancements in computing power, data availability, and new algorithms have driven these breakthroughs in artificial intelligence and machine learning.
How Does AI Work?
Before jumping to various AI tools, let us first look at the building blocks of Artificial Intelligence. The "true" AI we imagine, in which computers replace humans, is decades away, according to experts. The models currently used in AI tools employ several concepts that we are already familiar with.
Machine Learning ➖
“It is how computers recognize patterns and make decisions without being explicitly programmed.”
Machine Learning can also be described as the AI currently in use: all AI tools rely on Machine Learning to make decisions. With Machine Learning, a computer can be programmed through trial and error rather than the standard algorithmic, step-by-step approach.
Machine Learning learns to recognize patterns from the data (lots of it) that it is trained on. It takes data from various sources, ranging from audio and video to text, and recognizes patterns in it to make predictions.
Machine learning resembles human learning: humans meet new people and scenarios, learn from them, and become better at their judgment, while machines meet new data to do the same.
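To make this concrete, here is a minimal sketch of the idea using scikit-learn; the tiny fruit dataset and its feature values are invented purely for illustration, and this is just one of many ways to train such a model.

```python
# A minimal sketch: the model learns a pattern from labeled
# examples rather than being explicitly programmed with rules.
# The fruit data below is invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each example is [weight in grams, diameter in cm].
X_train = [[150, 7], [170, 8], [1400, 20], [1600, 22]]
y_train = ["apple", "apple", "melon", "melon"]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)  # the "learning" step

# The model now labels fruit it has never seen before.
print(model.predict([[160, 7], [1500, 21]]))  # ['apple' 'melon']
```

No step-by-step rule for telling apples from melons was ever written; the classifier inferred one from the examples.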
Data and Bias ➖
“Machine Learning is only as good as the (training) data used in it.”
We observed that Machine Learning models collect data and learn from them to make predictions. Therefore, the quality and variety of data passed into the training phase are extremely important. Training data generally comes from users and their choices or is sometimes taken from the Internet.
It is important to consider data from various sources to ensure optimal performance. For example, if a model is trained to identify fractures using only male X-ray data, it cannot identify the problem when given female X-ray data. This is called a blind spot, which can create bias in the data.
Data is classified as biased when it favors certain things while de-prioritizing or excluding others.
Therefore, before training a machine with data, we must cross-check two things.
Does this data represent all possible scenarios and users without any bias?
Is this data sufficient to accurately train the system?
In summary, for machine learning, data is the code: the efficiency of the system relies upon it, so it is our responsibility to provide unbiased data.
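As a rough illustration of those two checks, the sketch below uses pandas to audit a hypothetical X-ray training set for blind spots; the column names, groups, and counts are assumptions made up for this example, not real data.

```python
# A minimal sketch: auditing training data for blind spots before
# training. The DataFrame and its columns ("sex", "fracture") are
# hypothetical, purely for illustration.
import pandas as pd

train = pd.DataFrame({
    "sex": ["male"] * 950 + ["female"] * 50,
    "fracture": [1, 0] * 500,
})

# 1. Does the data represent all groups without bias?
print(train["sex"].value_counts(normalize=True))
# male: 0.95, female: 0.05 -> a likely blind spot for female X-rays

# 2. Is there enough data per group to train the system accurately?
print(train.groupby("sex").size())
```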
Neural Networks ➖
“It is the secret behind how a computer processes information and recognizes patterns.”
The name "neural network" comes from the neurons that the human brain possesses. Neurons are interconnected and have two parts: one to take in input and the other to pass on output. Neurons are extremely important, as they aid in making decisions based on our experiences.
Neural Networks and their structures are inspired by the human brain, mimicking the way biological neurons signal to one another. Artificial neural networks (ANNs) are composed of node layers containing an input layer, one or more hidden layers, and an output layer. Each node or artificial neuron connects to another node and has an associated weight and threshold. If the output of any individual node is above the specified threshold value, the node is activated and sends data to the next layer of the network. Otherwise, no data are passed along to the next layer of the network.
Neural nets are a means of doing Machine Learning, in which a computer learns to perform a task by analyzing training examples. For instance, an object recognition system would be fed millions of images of cars, houses, coffee cups, and so on, and it would find visual patterns in the images that consistently correlate with particular labels.
Neural Nets consist of thousands or millions of processing nodes that are densely interconnected. They work in a “feed-forward” fashion, in which data moves in only one direction.
To each of its incoming connections, a node will assign a number known as a “weight.” When the network is active, the node receives a different data item (a different number) over each of its connections and multiplies it by the associated weight. It then adds the resulting products, yielding a single number. If this number is below the threshold value, the node passes no data to the next layer. If the number exceeds the threshold value, the node “fires,” which in today’s neural nets generally means sending the number — the sum of the weighted inputs — along all its outgoing connections.
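The weighted-sum-and-threshold behavior described above fits in a few lines of NumPy. This is a simplified sketch of a single node, not production neural-network code, and the input values, weights, and threshold are arbitrary illustrative numbers.

```python
# A minimal sketch of one artificial neuron: multiply each incoming
# value by its weight, add the products, and "fire" only if the sum
# exceeds the threshold. All numbers are arbitrary examples.
import numpy as np

inputs = np.array([0.8, 0.2, 0.5])    # data arriving on 3 connections
weights = np.array([0.9, -0.4, 0.3])  # one weight per connection
threshold = 0.5

weighted_sum = np.dot(inputs, weights)  # 0.72 - 0.08 + 0.15 = 0.79
fires = weighted_sum > threshold

# A firing node sends the weighted sum along its outgoing
# connections; otherwise it passes nothing to the next layer.
output = weighted_sum if fires else None
print(weighted_sum, fires, output)
```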
When a neural net is trained, all its weights and thresholds are initially set to random values. Training data is fed to the bottom layer (the input layer) and passes through the succeeding layers; it is multiplied and added together in complex ways until it finally arrives, radically transformed, at the output layer. During training, the weights and thresholds are continually adjusted until training data with the same labels consistently yields similar outputs.
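To illustrate that start-random-then-adjust loop, here is a toy perceptron that learns the logical AND function. The perceptron update rule shown is a simplified stand-in for the gradient-based training modern networks actually use, and the dataset is invented for illustration.

```python
# A minimal training sketch: weights start random and are repeatedly
# adjusted until the outputs match the labels. Real networks use
# gradient descent, but the basic idea is the same.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])       # label = AND of the two inputs

weights = rng.normal(size=2)     # random initial weights
bias = rng.normal()              # random initial threshold shift
lr = 0.1                         # learning rate

for _ in range(20):              # several passes over the data
    for xi, target in zip(X, y):
        prediction = int(np.dot(xi, weights) + bias > 0)
        error = target - prediction   # 0 when the output is correct
        weights += lr * error * xi    # nudge weights toward the label
        bias += lr * error

print([int(np.dot(xi, weights) + bias > 0) for xi in X])  # [0, 0, 0, 1]
```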
In summary, today's Artificial Intelligence uses Machine Learning to make predictions based on the (ideally unbiased and varied) data on which it is trained, and these predictions are produced by the mathematical functions devised in neural networks.
Popular Artificial Intelligence Tools
ChatGPT
ChatGPT is a natural language processing tool driven by AI technology that allows users to have human-like conversations with a chatbot. The language model can answer questions and assist you with tasks such as composing emails, essays, and code. Created by OpenAI, ChatGPT was launched on November 30, 2022.
The impact of ChatGPT on its users is so large that it took the company only two months to reach a base of 100 million users, while other giants like TikTok took nine months to do the same.
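For developers, the same models behind ChatGPT are exposed through OpenAI's API. The sketch below uses the chat-completions interface from a recent version of the openai Python package; the model name and prompt are illustrative choices, and an API key is assumed to be set in the environment.

```python
# A minimal sketch of calling a ChatGPT-family model through
# OpenAI's Python library. Assumes `pip install openai` and an
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Draft a short thank-you email."}],
)
print(response.choices[0].message.content)
```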
Bard AI
Bard is Google's experimental, conversational AI chat service. It is meant to function similarly to ChatGPT, with the biggest difference being that Google's service pulls its information from the web. Launched on March 21, 2023, the AI chat service is powered by Google's Language Model for Dialogue Applications (LaMDA), which was unveiled two years prior.
Google is one of the largest software companies in the world, with products spanning a multitude of domains. Its massive user base can accelerate this experiment and position the company as a tough competitor in the Artificial Intelligence era.
AI Art Generators
AI art generators take a text prompt and, as best they can, turn it into a matching image. Since your prompt can be anything, the first thing all these apps have to do is attempt to understand what you are asking. To do this, AI algorithms are trained on hundreds of thousands, millions, or even billions of image-text pairs. This allows them to learn the differences between dogs and cats, Vermeers and Picassos, and everything else. Different art generators understand complex text to different degrees, depending on the size of their training database.
They use diffusion models and Generative Adversarial Networks (GANs) to render the resulting image. Popular art generators include DALL·E 2, Imagine AI, and Pixray.
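To see a diffusion model in action, here is a hedged sketch using Hugging Face's diffusers library with a publicly available Stable Diffusion checkpoint; the model ID and prompt are illustrative choices, not tools named in this post.

```python
# A minimal sketch: text-to-image generation with a diffusion model
# via the Hugging Face `diffusers` library. The checkpoint and prompt
# are illustrative; a CUDA GPU is assumed for reasonable speed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# The pipeline first encodes the text prompt, then iteratively
# denoises random noise into an image that matches it.
image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")
```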
TensorFlow
TensorFlow is an open-source library launched by Google in 2015 that supports large-scale machine learning, deep learning, and other statistical and predictive analytics workloads. It lets developers implement machine learning models much more easily, as it handles several important aspects for them, such as acquiring data, serving predictions at scale, and refining future results.
It can train and run deep neural networks for tasks such as handwritten digit classification, image recognition, word embedding, and natural language processing (NLP). The code contained in its software libraries can be added to any application to help it learn these tasks.
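Since handwritten digit classification is the canonical TensorFlow example, here is a minimal sketch of that task using its Keras API; the layer sizes and epoch count are ordinary defaults chosen for illustration, not tuned values.

```python
# A minimal sketch: classifying handwritten digits (MNIST) with
# TensorFlow's Keras API. Layer sizes and epochs are illustrative.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> 784 inputs
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```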
Moral and Ethical Perspective of Artificial Intelligence
AI raises profound moral and ethical considerations. As it becomes more integrated into our everyday lives and impacts various sectors, it is essential to address these implications carefully. In a world where rapid digitization has become the norm and every action is transformed into a set of algorithms, it is important to define morality in the context of Artificial Intelligence. Ethical considerations in AI include privacy, fairness, transparency, accountability, and bias, and these concerns must be addressed to ensure that AI is developed and deployed responsibly and in the best interest of humanity.
Let us consider an example in which human conscience saved the entire world from World War III. Stanislav Petrov, a lieutenant colonel in the Soviet Air Defense Forces, was on duty on September 26, 1983, when the early-warning satellite system he was monitoring detected what appeared to be five approaching U.S. nuclear-armed intercontinental ballistic missiles. Petrov faced a critical choice that had to be made immediately: treat the warning as a false alarm or alert his superiors, who would likely launch a counterattack. Petrov waited 12 minutes and then reported a false alarm; he later explained that if the United States were starting a nuclear war, it would do so with more than five missiles. He was correct: the satellites had mistaken sunlight reflecting off clouds for attacking missiles. A fault in satellite technology could have caused another catastrophic disaster, but thanks to Petrov's reasoning, humanity survived those 12 minutes in which it vacillated between survival and extinction.
Artificial Intelligence cannot be allowed to commit such a mistake, one that could change the course of history forever, so the necessary controls must be enforced. The use of AI to generate morphed images and fake stories to manipulate situations has already begun. If left uncontrolled, this can distort reality and overwhelm fact-checking mechanisms. Proper regulation must therefore be enforced on the companies creating such software; even if curtailing the use of this technology cannot stop incoming disasters entirely, it can at least postpone them.
Conclusion
Inevitably, whether we like it or not, Artificial Intelligence has firmly entrenched itself in our lives, and its influence is only set to grow. Much like the Industrial Revolution replaced manual labor with machines, AI is poised to revolutionize the way we work and live. AI will indeed bring forth job displacements as certain tasks become automated, causing concern among many. However, history has shown that with every technological shift, new opportunities emerge, and AI is no exception. As some jobs fade away, a plethora of new roles will arise, demanding skills and expertise in fields we may not have even envisioned yet. It is our collective responsibility to adapt, upskill, and embrace this transformative wave, ensuring we capitalize on the new and exciting job prospects AI will create. Let us be prepared to accept and move forward, shaping this AI-driven world to benefit all of humanity. Together, let us embrace the future that AI holds and pave the way for a brighter and more promising tomorrow.
Written by
Harsha Vardhan Mirthinti
I am a graduate student in Computer Science at Arizona State University. I am actively looking for 2024 summer internships and other tech-related opportunities to enhance my skills. My interests pertain to AI, Machine Learning, Spring Boot, and Django development. I also read philosophy and enjoy conversations in that arena. If you want to get lost in existential nihilism or dialectical materialism, let's connect!!!