Given the press that surrounds it, it's easy to be confused. After all, if you believed everything you read, you'd think we're already practically living in a world controlled by artificial intelligence (AI), and that it's only a matter of time before the machines take over.
Except that, well, a quick reality check will easily show that this perspective is far from the truth. Admittedly, AI has had a profound impact on many aspects of our lives - from intelligent personal assistants to semi-autonomous cars to customer service agents and more - but the overall influence of artificial intelligence is still very modest.
Part of the confusion stems from a misunderstanding of what AI is. Thanks to a number of popular and influential sci-fi movies, many people associate AI with a broad, general intelligence that could enable something like the evil, malevolent world of Skynet from the Terminator movies. In reality, however, most AI applications of today and tomorrow are very practical and, as a result, much less dramatic.
Leveraging AI-based computer vision on a drone to spot a crack in an oil pipeline, for example, is a great AI application, but it is hardly the stuff of AI-inspired nightmares. Similarly, there are many other examples of very practical applications that take advantage of AI's pattern-recognition capabilities in ways that are not only far from scary, but frankly amount to just another kind of analytics.
Even Google's impressive Duplex demonstrations from its recent I/O event may not be as impressive as they first appeared. Duplex was specifically trained to make appointments and dinner reservations; it cannot schedule a doctor's appointment, plan a night out with friends, or handle a host of other real-life scenarios in which the kind of voice-assistant phone calls shown in the Duplex demo would otherwise be possible.
Most AI-based efforts are still extraordinarily literal. If there is an AI app that can recognize dogs in photos, for example, that's all it can do. It cannot recognize other animal species, let alone distinguish individual breeds, nor can it serve as a general-purpose object detection and identification service. While it's easy to assume that an application that can identify specific dog breeds has a similar intelligence about other objects, this is simply not the case. We are not dealing with a general intelligence when it comes to AI, but with a very specific intelligence that depends heavily on the data it has been fed.
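To make that narrowness concrete, here is a minimal, purely illustrative Python sketch (the labels and logits are hypothetical stand-ins, not from any real system). A classifier trained only on dog breeds has a fixed output space, so even a photo of a cat can only ever be answered with a dog breed:

```python
import numpy as np

# Illustrative only: a model trained on dog breeds can answer
# *only* with dog breeds -- its output space is fixed at training time.
DOG_BREEDS = ["beagle", "husky", "poodle", "terrier"]

def softmax(logits):
    # Convert raw scores into probabilities over the known labels.
    exps = np.exp(logits - np.max(logits))
    return exps / exps.sum()

def classify(logits):
    # Whatever the input image was (a cat, a car, a pipeline crack),
    # the model must pick from the labels it was trained on.
    probs = softmax(np.asarray(logits, dtype=float))
    best = int(np.argmax(probs))
    return DOG_BREEDS[best], float(probs[best])

# Hypothetical raw scores for a photo of a cat: the "dog" model
# still returns a dog breed, just with low confidence.
print(classify([0.2, 0.1, 0.3, 0.15]))
```

Nothing in such a model can step outside its label set; the "intelligence" is bounded by its training data, which is exactly the point.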
I stress this not to denigrate the incredible capabilities that AI has already brought to a wide variety of applications, but simply to clarify that we cannot think about artificial intelligence the way we think about human intelligence. AI-driven advances are astonishing, but they are not to be feared as the near-term forerunners of terrible, scary things to come. While I certainly won't deny the potential for AI-based applications to produce very unpleasant outcomes a decade or two from now, in the short to medium term such outcomes are not just improbable, they are not even technically possible.
Instead, in the short term we should focus on applying the highly targeted capabilities of AI to practical (though not necessarily revolutionary) real-world challenges. That means things like improving efficiency and reducing failure rates on manufacturing lines, or delivering smarter responses to our smart speaker queries. There are also more consequential potential outcomes, such as more accurate detection of cancer in X-rays and CT scans, or help in making an unbiased decision about whether to grant a loan to a potential bank customer.
Along the way, it's also important to think about the tools that can help deliver AI experiences faster and more effectively. For many organizations, this means a growing focus on new types of compute architectures, such as GPUs, FPGAs, DSPs, and AI-specific chip implementations, all of which offer advantages over traditional CPUs for inference-focused applications. At the same time, it's essential to look at the tools that allow easier, more intelligible access to these new environments, whether that's software platforms like Nvidia's CUDA for GPUs or National Instruments' LabVIEW tool for programming FPGAs.
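As a rough illustration of what that easier access looks like in practice, here is a minimal Python sketch using PyTorch, one popular framework that targets Nvidia GPUs through CUDA (the framework choice and the tiny placeholder model are my own illustrative assumptions, not something specified above):

```python
import torch

# The same inference code runs on a GPU when one is available
# and falls back to the CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(128, 10).to(device)  # placeholder for a trained model
model.eval()

with torch.no_grad():
    batch = torch.randn(32, 128, device=device)  # stand-in input batch
    predictions = model(batch).argmax(dim=1)

print(f"Inference ran on: {device}")
```

The kernel launches and device memory transfers happen behind the same high-level Python calls, which is precisely the kind of accessibility these tools are meant to provide.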
In the end, we will see artificial intelligence-based applications provide an incredible range of new capabilities, the most important of which in the short term will be to finally make "smart" devices smart. Far too many people are frustrated by the lack of "intelligence" in many of their digital devices, and I expect many of the first key AI advances to focus on these basic applications. Eventually, we will also see a range of much more advanced capabilities, but in the short term it's important to remember that the phrase "artificial intelligence" implies far less than it first seems.
Bob O'Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting and market research firm. You can follow him on Twitter @bobodtech. This article was originally published on Tech.pinions.