How smart is too smart? According to Elon Musk, artificial intelligence will become too smart too soon. He is calling for government regulation, and unlike Cassandra, his warnings are not going unheard.
Musk gave an audience at the International Space Station R&D Conference an example of technology that is already too smart: DeepMind’s AlphaGo. The concern is not so much the machine’s intelligence as the speed at which it was able to learn and apply its knowledge. AlphaGo arrived years ahead of schedule, and that unpredictability is what disturbs Musk: if AI keeps “learning” faster than its developers anticipate, it could cause a “tidal wave” of problems.
Musk’s claims aren’t dramatic. He put his concerns in context by comparing AI to other technologies that were revolutionary in their time, including air travel and telecommunications. Both evolved alongside government agencies, the FAA and the FCC, that monitored and controlled their development. Musk suggests that a similar agency is needed now so that AI developers face limits on what their bots can accomplish.
The idea of tighter regulation seems to cut against entrepreneurial innovation, and Musk is aware of the contradiction. As an entrepreneur and inventor he often rails against government regulation, but he sees why it is necessary here. He is calling for rules now because artificial intelligence is growing more sophisticated and powerful each day; by the time the government catches up, Musk warns, artificial intelligence is “going to come on like a tidal wave.”
The digital revolution includes a lot of changes for artificial intelligence, which is already shaping business, fashion and products for the future. The web is full of predictions about new technologies, and artificial intelligence is playing a role in driving recommendations, customer service and search responses. The idea is to return to the era when business owners knew their customers personally. Modern technology is improving the value today’s shoppers receive both in stores and on the internet.
Artificial intelligence is being used for chatbots, a customer service tool adopted by many popular beauty and fashion brands. It is used primarily in the messaging space and has been successful at converting new customers. The emphasis is on frictionless shopping: today’s customers expect to get exactly what they want easily and quickly, including when they pay for their purchases. The Internet of Things is connecting stores, fitting rooms, checkouts and more. This is the level of convenience consumers want and are beginning to expect. For more on the future of technology, visit https://www.forbes.com/sites/rachelarthur/2016/12/19/8-tech-trends-that-will-shape-the-future-of-fashion-and-luxury-retail-in-2017/#43416dfd7615.
The Internet of Things will require a voice interface. Voice is becoming a huge platform, and Apple, Microsoft, Google and Amazon are already major players. This is about much more than an impact on consumer goods: fashion brands will be able to experiment and find a potentially new future for themselves. Mary Meeker of KPCB has predicted that by the year 2020 at least half of all web searches will be conducted by image or voice rather than text. Speaking is more convenient and efficient than typing, which is why voice interfaces are creating a new world for human interactions with computers.
Many areas of our lives are already touched by artificial intelligence (AI). When we receive recommendations from Amazon.com, eBay or other online merchants, we don’t always realize that each interaction with their websites leaves behind a silent digital footprint that can be parsed, categorized and matched to merchandise we may never have viewed. The upsell merchandise is then automatically offered to us at staged intervals during our shopping experience. The AI software even learns more about our preferences and shopping habits from our responses to its offerings.
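That matching step can be surprisingly simple. The sketch below, using invented browsing sessions and a made-up `recommend` helper, shows item-to-item co-occurrence at its most basic: products frequently viewed in the same session are offered as upsells. Real recommendation engines are far more elaborate, but this is the core idea.

```python
from collections import defaultdict

# Hypothetical browsing histories: each session is a set of viewed items.
sessions = [
    {"running shoes", "socks", "water bottle"},
    {"running shoes", "socks", "headband"},
    {"running shoes", "water bottle"},
]

# Count how often each ordered pair of items appears in the same session.
co_views = defaultdict(int)
for session in sessions:
    for a in session:
        for b in session:
            if a != b:
                co_views[(a, b)] += 1

def recommend(item, top_n=2):
    """Suggest the items most often viewed alongside `item`."""
    scores = {b: n for (a, b), n in co_views.items() if a == item}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("running shoes"))
```

Each purchase or click simply adds more rows to the co-occurrence table, which is one way software can “learn” preferences without any explicit profile.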
Forecasts by BofA Merrill Lynch Global Research estimate a $70 billion market for AI by 2020. Some predict that AI’s value to the economy will double and that it will boost workplace productivity by upwards of 40 percent in less than 20 years.
AI is already proving its value in healthcare, where it has been credited with reducing errors in medical diagnoses by as much as 85 percent. Its dramatic impact in clinical settings is being demonstrated by machine-learning algorithms that locate anomalies in MRI scans that even highly experienced physicians have been unable to detect. The untapped value of these algorithms in medicine lies in their ability to deliver greater precision in diagnosis and treatment at far lower cost.
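At its simplest, “locating an anomaly” means flagging values that deviate sharply from the norm. Production systems train deep models on labelled scans; the toy sketch below, with invented intensity values standing in for an MRI slice, only illustrates the statistical intuition behind outlier detection.

```python
import statistics

# Toy stand-in for one row of an MRI slice: a flat list of intensities.
# Values are invented; real pipelines use trained models, not z-scores.
scan = [100, 102, 98, 101, 99, 103, 97, 180, 100, 101]

mean = statistics.mean(scan)
stdev = statistics.pstdev(scan)

def anomalies(values, z_cutoff=2.5):
    """Return indices whose intensity deviates strongly from the mean."""
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > z_cutoff]

print(anomalies(scan))  # flags the unusually bright value at index 7
```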
In one demonstration of AI’s power in marketing, the audience at the PegaWorld Business and IT Conference watched an intelligent Coca-Cola machine interact with a consumer. The machine communicated with software running on the patron’s cell phone; when he came within 15 feet, it sent an alert informing him that a nearby machine had his favorite drink, Coke. The customer walked up, paid, and received his drink without exchanging any cash.
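Stripped of the hardware, that interaction is a proximity check plus a stock check. The sketch below uses invented coordinates and a hypothetical `maybe_alert` helper; the real machine presumably detects phones via Bluetooth beacons rather than raw positions, so treat this purely as an illustration of the decision logic.

```python
import math

MACHINE_POS = (0.0, 0.0)   # hypothetical machine location, in metres
ALERT_RADIUS_FT = 15.0     # the 15-foot range from the demonstration
FT_PER_METRE = 3.28084

def distance_ft(phone_pos):
    """Straight-line distance from the machine to the phone, in feet."""
    dx = phone_pos[0] - MACHINE_POS[0]
    dy = phone_pos[1] - MACHINE_POS[1]
    return math.hypot(dx, dy) * FT_PER_METRE

def maybe_alert(phone_pos, favorite_drink, stock):
    """Alert only when the phone is in range AND the drink is stocked."""
    if distance_ft(phone_pos) <= ALERT_RADIUS_FT and favorite_drink in stock:
        return f"A machine nearby has your favorite drink: {favorite_drink}"
    return None

print(maybe_alert((3.0, 0.0), "Coke", {"Coke", "Sprite"}))
```

A phone 3 metres away (about 10 feet) triggers the alert; one 10 metres away (about 33 feet) does not.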
The next major revolution to leave a lasting global impact will come from intelligent software running on computerized devices. As the line between computer intelligence and human intelligence blurs, interactions with AI software will assume ever larger roles in our daily experience.
It is difficult to understand how something so obvious evaded us for so long, but computer scientists are beginning to develop visual technologies that enhance artificial intelligence. Fei-Fei Li, who directs the artificial intelligence lab at Stanford University, has observed that the variety of species surged during the Cambrian explosion because animals evolved eyes. Vision lets a creature take in its environment, name objects, navigate obstacles, sense predators, seek prey and, most essentially, learn. By equipping artificial intelligence with cameras, computer scientists may be on the verge of triggering another evolutionary surge, this time in the tech realm.
The idea is a tad creepy, but equipping inanimate objects with cameras will help them gather data, and that visual data can then be used to improve user experiences across a range of industries. TechCrunch contributor Evan Nisselson offers an example from e-commerce: if your home is replete with cameras feeding visual data to an in-home digital assistant, the assistant can tell you when the back pocket of your pants is wearing out and, when prompted, order a new pair. Think of it like the Amazon Dash button, only with no need to push anything. The assistant will assess a visual scene and arrive at a conclusion, much as a human would.
As with most technologies, the consumer applications are fairly predictable. Visual data put to work in industrial and business settings will likely yield more innovative developments than clothes shopping. Because cameras can capture more than the human eye, such as thermal activity, X-rays, ultrasound and other signals that go unnoticed by people, visual intelligence can fill in our visual gaps. The result will no doubt improve manufacturing productivity and business analytics in unpredictable ways.
Brief hesitations and differences in hand movements between truth-tellers and fake responders could help stop identity thieves in the future. Italian researchers developed a machine-learning algorithm based on a series of 12 questions posed to volunteers, mirroring the security questions financial websites often use, with one twist: they included unexpected questions like “What zodiac sign are you?” The element of surprise let them discover differences between how thieves and genuine customers respond. The AI was then tested on an additional question set to see whether it could determine who was being honest or deceptive.
A group of 40 respondents answered a series of questions that included typical ones like “Do you live in Padua?” or “Are you Italian?” These were regarded as easy for identity thieves, but the more unusual questions enabled researchers to plot mouse-movement trajectories for truth-telling and falsification. The difference arose from the extra time liars needed to compute the correct answer: “the uncertainty in responding to unexpected questions” led to errors, which in turn allowed the researchers to build the algorithm. It has been able to distinguish fake from real responses with 95 percent accuracy.
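The two cues the researchers relied on, longer response times and more wandering mouse paths, can be sketched in a few lines. The trajectories, thresholds and `looks_deceptive` rule below are all invented for illustration; the actual study trained a classifier on labelled data rather than using fixed cutoffs.

```python
import math

def features(trajectory):
    """Extract two discriminative cues from a list of (x, y, t) samples:
    total response time and mean deviation from the straight-line path."""
    (x0, y0, t0), (x1, y1, t1) = trajectory[0], trajectory[-1]
    response_time = t1 - t0
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy) or 1.0
    # Mean perpendicular distance of each sample from the start-to-end line.
    deviation = sum(
        abs(dy * (x - x0) - dx * (y - y0)) / length for x, y, _ in trajectory
    ) / len(trajectory)
    return response_time, deviation

def looks_deceptive(trajectory, time_cutoff=1.5, dev_cutoff=40.0):
    """Flag slow, wandering movements (cutoffs invented for this sketch)."""
    response_time, deviation = features(trajectory)
    return response_time > time_cutoff or deviation > dev_cutoff

# Invented examples: a quick straight path vs. a slow, meandering one.
truthful = [(0, 0, 0.0), (50, 40, 0.4), (100, 80, 0.8)]
deceptive = [(0, 0, 0.0), (120, 5, 1.0), (60, 90, 2.0), (100, 80, 2.6)]
print(looks_deceptive(truthful), looks_deceptive(deceptive))
```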
The study was inspired by a data breach at the U.S. Internal Revenue Service, in which hackers used personal information from thousands of Americans to guess the security questions on the IRS website. Typical questions included “Which of the following streets have you lived on?” and “What is your monthly mortgage payment?” After guessing these answers, the identity thieves gained access to thousands of personal tax returns, which contained more than enough information, including Social Security numbers and salary details, for many kinds of identity theft and credit fraud.
Science fiction movies featuring robots that think, act and talk like humans were once thought of as pure fantasy. Today, artificial intelligence (AI) is a reality. Recently, Professor Newton Howard of the University of Oxford created an artificial brain in the form of a high-bandwidth neural implant, a great advancement for artificial intelligence and technology. He is now setting his sights on testing the artificial brain in rodents. His work has produced several patents on the advanced technologies and algorithms that power it.
The new device works in concert with the human brain’s own internal systems, making it far more advanced than previous technologies that merely delivered electrical stimulation to the brain. It is thought to hold great therapeutic promise for the millions of patients who suffer from brain trauma, neurodegenerative diseases and other brain disorders. Because the brain controls all the body’s internal organs, a device that helps it function at full capacity could play a powerful role in fighting disease and keeping the body healthy. Many in the medical world think the merging of man with machine is inevitable. It is certainly more than science fiction; today, it is science fact.
This surge in neuroscience and advanced learning has led to renewed interest in the field, and more and more people are entering it. Professor Howard says he is very happy to see others joining him and hopes to work with them to solve more challenges.