Over the course of the last fifty years, the artificial intelligence research field has produced an immense range of capabilities that the general public does not think of as AI. Most of our online activities involve some form of AI (virtual agents, pattern recognition, targeted advertising). However, everything achieved so far is a mere grain of sand compared with what lies ahead. In order to position ourselves for these advancements, we need to understand the process.

Business enterprises have become increasingly aware that artificial intelligence can be (and in the future will be) a definitive factor for success. Currently, these capabilities are implemented in data analysis algorithms that can properly store, process and analyze Big Data (another growing sphere of business management), but they will soon include product optimization algorithms and complex customer engagement techniques.

Artificial Intelligence: A Complete Guide


In this article, we present a complete guide to artificial intelligence through sections 1) Origins of AI; 2) Goals of AI; 3) Approaches and Tools; 4) Issues of AI; 5) Application in Entrepreneurship; and 6) Examples of AI Implementation in Business.


Origins of AI

The Idea and Philosophical Background

The foundations of ideas revolving around the creation of artificial intelligence can be traced back to automatons built by the Egyptian and Chinese civilizations, as well as to ancient Greek mythology. Attributing human properties to objects and abstract ideas is one of the ways people have been making sense of their existence from the moment they acquired consciousness.

With the development of logic and the emergence of symbolic reasoning as a field of philosophy, the creation of machines that could emulate human intelligence became conceivable in practice. Symbolic reasoning holds that symbols (numbers, graphs, calculations, statistics, etc.) can be used as substitutes for longer expressions in order to solve problems. The idea was proposed in the 17th century by Thomas Hobbes, who is considered to be the ‘Grandfather of AI’.

Further on, as engineering advanced over the centuries, the two fields began to converge. The first computer design – the Analytical Engine – was conceived in the 19th century by Charles Babbage (though a working model was not built until 1991). With the ongoing progress of technology from the early 20th century onward, as well as the increasing necessity of better understanding the processes of computing, various models and theoretical discourses were created.

The Turing Test

Alan Turing published a fundamental work on the issue in 1950 – the paper Computing Machinery and Intelligence. Building on his earlier Turing machine model (1936), through which he had explored the theoretical limits of what can be computed, he asked whether those computing possibilities extended to the sphere of human intelligence, and created the Turing test. The objective of the test was to identify whether a machine could convince a suspicious interrogator that it was indeed a human being. The test seemed quite simple – no complex assignments (such as creating original art, for example) were involved; in order to pass, the computer merely had to make small talk with a human being and show understanding of the given context. As simple as it sounds from a human perspective, achieving such results proved extremely difficult and, to this date, remains unachieved. The primary problems were initially those of mid-20th-century hardware – memory limitations masked the later difficulties of software realization.

Researchers are still trying to create software that would pass the Turing test, presenting their work at the annual Loebner Prize competition. The Loebner Prize of $100,000 in cash is still waiting for the first program that convincingly passes the test.

AI – Field of Study

Based on advancements in philosophy, logic, mathematics, cybernetics, neuroscience and information technology, the artificial intelligence field of study was born in 1956 at a conference at Dartmouth College. Experts John McCarthy and Marvin Minsky became prominent names in the wide-spanning effort to create intelligent machines over the following fifty years.

Naturally, in order to create intelligence, one must know what intelligence is. However, the abstract definitions of intelligence as a property of human beings (and some animals) – manifested in logic, reasoning, learning through experience, application of knowledge, creativity and a myriad of other traits – cannot simply be translated into symbols to produce sentient machinery.

Computer-Chess and Expert Systems

Scientists implemented different approaches and methods in order to build up artificial intelligence. One line of development was the evolution of chess-playing software. Because it was much easier to achieve high performance through brute-force techniques – meaning that the computer evaluates a certain number of future moves and chooses the line of play that minimizes its maximum possible loss (the minimax principle) – chess-playing software did not focus on building sentience, but rather on advanced search techniques and hardware capable of sustaining large databases.
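The brute-force principle described above can be sketched in a few lines of code. The following is a purely illustrative minimax search – not Deep Blue's actual algorithm – applied to a toy game of Nim, in which players remove one to three sticks and whoever takes the last stick loses:

```python
def moves(sticks):
    """All legal numbers of sticks the player to move may remove."""
    return [n for n in (1, 2, 3) if n <= sticks]

def minimax(sticks, maximizing):
    """Return +1 if the maximizing player can force a win, else -1.

    If no sticks remain, the previous player took the last stick and
    lost, so the player whose turn it is has already won.
    """
    if sticks == 0:
        return 1 if maximizing else -1
    if maximizing:
        return max(minimax(sticks - n, False) for n in moves(sticks))
    return min(minimax(sticks - n, True) for n in moves(sticks))

def best_move(sticks):
    """Choose the move whose worst-case outcome is best for us."""
    return max(moves(sticks), key=lambda n: minimax(sticks - n, False))
```

Real chess engines apply the same principle but add an evaluation function for unfinished positions and aggressive search pruning, since chess cannot be searched to the end.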

On the other hand, expert systems were developed to provide expert assistance in different industries. By creating a proficient knowledge database and incorporating machine learning software – which enables machines to make predictions and provide consultation regarding given data – as well as interaction software (based on natural language processing), scientists broadened the capabilities of their ‘intelligent machines’. These achievements are now used in navigation systems, in medicine, as well as in business.

Winters of AI

After the initial exhilaration with the AI field of research, it soon became clear that solid results were going to take more time than anticipated and announced. After the ALPAC and Lighthill reports, which showed unsatisfactory advancement in AI projects (problems with natural language software, slow progress), the flow of investment was cut off – the first AI Winter began in 1974 and lasted until the early 1980s, when the British government revived funding for AI projects in response to Japanese endeavours in logic programming. However, in 1987, the collapse of the specialized Lisp machine market (undercut by cheaper general-purpose computers) and a renewed decrease in funding brought on the second AI Winter, which lasted for five years.

In the ‘winter’ periods, AI research continued under different names, which would become sub-categories of the field in the future – evolutionary programming, machine learning, speech recognition, data mining, industrial robotics, search engines and many others.

Where is AI now?

The artificial intelligence research field has enabled much progress that is regarded as ‘common’ nowadays – specific and personalized search engine results, intelligent personal assistant software such as Siri, Google Translate, vehicle navigation systems, diverse robotics enhancements and countless others.

Some notable achievements include:

  • IBM’s Deep Blue became the first computer to win a chess match against a reigning world champion – Garry Kasparov – in 1997.
  • IBM’s question answering system Watson won the Jeopardy quiz against proficient opponents in 2011.
  • Eugene Goostman, a chatbot, persuaded members of a Turing test jury that it was a 13-year-old boy from Ukraine in 2014. However, Eugene convinced only 33% of the judges – barely above the 30% threshold. Such a result is not considered a genuine pass of the Turing test, because it relies mostly on external conditions (a child from a non-English-speaking country can be forgiven insufficiencies in small talk, while an adult native speaker would not be). In the course of 2015, the developers of Eugene are expected to defend their victory and prove that they invented sentient software (which they most probably did not).

As can be noted from all that is stated above, the hard problems of artificial intelligence have not seen immense progress in the last fifty years. Consequently, experts predict at least fifty more years of trial and error before human intelligence is emulated. It is simply too broad and complex a subject to be resolved in a short period of time. However, the advances made during the quest so far have greatly influenced and shaped the world we live in.


Goals of AI

The ‘final’ goal of artificial intelligence endeavors is to create an intelligent machine which is capable of reasoning, planning, solving problems, thinking abstractly, comprehending complex ideas, learning quickly and learning from experience (which is an agreed definition of human intelligence). In practice, this artificially emulated intelligence is to reflect a broad and deep ability to comprehend its surroundings so as to figure out what to do in infinitely many possible situations. In order to position itself adequately in its environment, the AI needs to be socially intelligent (meaning that it has to be able to perceive and properly react to a broad spectrum of abstract features and properties of the intelligible universe – for example, emotion). In order to manage problems optimally, it needs to be able to apply creativity in its functioning. All of the stated properties are attributed to the long-term goal of AI studies – general intelligence.

However, in order to achieve such a goal, scientists have to focus on a wide variety of complex concepts that are its building blocks, both individually and in correlation. The builders of the future intelligent machine need to draw on empirical studies of existing intelligent systems (mainly of human beings) as well as on the results of theoretical exploration and analysis of possible systems of intelligence (and their mechanisms and representations). These factors are essential for resolving issues related to existing intelligent systems, as well as for designing new intelligent or semi-intelligent machines. Essentially, this means that a full view of the complexity of the task must be acquired, because restricting endeavors solely to one field (for example, engineering) will not provide satisfactory results. It would have been impossible to construct airplanes without the examination of birds.

Deduction, reasoning, problem-solving

In the beginnings of AI research, the reasoning process was induced through step-by-step imitation of human processes in solving puzzles or making logical deductions. However, this approach depended greatly on computational resources and computer memory, which at the time were rather limited. These issues pointed out the necessity of imitating the immediate judgment processes of human beings rather than deliberate reasoning. Immediate judgment can be seen as the intuitive, subconscious knowledge which governs the direction of deliberate actions.

AI attempts to reach the goal of immediate judgment through a combination of:

  • Embodied Agents (autonomous entities that can interact with environment and are presented as a three-dimensional virtual-simulation/real-robot body);
  • Sensorimotor Skills (a combination of perceiving the environment through sensors and reacting with motor skills – for example, a robot perceives that a person is approaching and offering a hand as a greeting, and reacts by shaking the person’s hand);
  • Neural Networks (simulation of structures and processes in the neural systems, most notably, human brain: computing values from inputs; machine learning; pattern recognition; adaptive nature);
  • Statistical Approaches (mathematical approaches to specific problem resolutions).
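To make the neural network item concrete, here is a minimal sketch of ‘computing values from inputs’: a single artificial neuron with hand-picked, purely illustrative weights chosen so that it approximates a logical AND gate (real networks learn such weights from data rather than having them set by hand):

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of inputs passed
    through a sigmoid activation, yielding a value in (0, 1)."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Illustrative weights that make the neuron behave roughly like a
# logical AND gate on binary inputs: the output is close to 1 only
# when both inputs are 1.
and_output = neuron([1, 1], weights=[10.0, 10.0], bias=-15.0)
```

Networks of many such units, with weights adjusted by learning algorithms, underlie the pattern recognition and adaptive behavior mentioned above.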

Knowledge representation

In order to emulate a human being, AI needs to incorporate immense amounts of knowledge about objects, their properties, categories and relations to each other. Moreover, it has to represent situations and states, causes, effects and abstract ideas. The AI field uses an ontological approach to knowledge representation – that is, knowledge is postulated in sets of concepts whose relationships are defined within a domain.


Problems of knowledge representation include:

  • The impossibility of absolute true/false statements – everything has exceptions;
  • The breadth of human knowledge, which makes creating a comprehensive ontology almost impossible;
  • Sub-conscious and sub-symbolic forms of knowledge, which must also be incorporated.


Approaches to these problems include:

  • Statistical AI – mathematical resolution of specific problems;
  • Situated AI – systems that, as autonomous entities interacting with the environment, develop elementary behaviors;
  • Computational Intelligence – a computer that has understood enough concepts to extend its ontology by itself (via the Internet, for example).
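A toy sketch of the ontological approach (all concept names here are illustrative) shows both the idea of concepts related within a domain and the ‘everything has exceptions’ problem noted above:

```python
# Concepts linked by 'is-a' relations; properties attach to concepts
# and are inherited by walking up the hierarchy. An exception on a
# specific concept (the penguin) overrides the inherited default.

IS_A = {"canary": "bird", "penguin": "bird", "bird": "animal"}
PROPERTIES = {
    "animal": {"breathes": True},
    "bird": {"can_fly": True},
    "penguin": {"can_fly": False},  # exception overrides the default
}

def lookup(concept, prop):
    """Find a property on the concept or inherit it from ancestors."""
    while concept is not None:
        if prop in PROPERTIES.get(concept, {}):
            return PROPERTIES[concept][prop]
        concept = IS_A.get(concept)  # climb one level up the hierarchy
    return None  # property unknown in this ontology
```

Scaling this simple scheme to the breadth of human knowledge is exactly where the problems listed above arise.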

Automated planning

AI must be able to construct complex, optimized solutions in multidimensional space and then carry out these strategies/sequences of action. In other words, intelligent agents need to be able to visualize potential futures (predictive analysis), set goals for action (decision making) and perform in a manner which maximizes the efficiency (value) of the process.

These goals are to be handled both offline (for known environments) and online (for unexpected environments). Scientists still have to deal with the issue of unpredicted scenarios – situations in which the machine is expected to react intelligently.

Machine learning

Machine learning is the construction and study of algorithms which allow AI systems to make predictions and decisions based on data input and the knowledge acquired through it.

It can be focused on:

  • unsupervised – finding patterns in streams of input without labeled examples (for example, grouping incoming mail into clusters of similar messages);
  • supervised – learning classifications and relations from labeled input data (for example, learning to route mail into spam and non-spam categories from examples labeled by users).

Machine learning is used in various spheres of information technology such as spam filtering (mentioned as an example above), optical character recognition, search engine personalization, computer vision and data mining (predictive analysis).
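The supervised spam example above can be sketched as a toy word-frequency classifier. This is a crude, illustrative stand-in for real statistical filters, with a hand-made four-message training set:

```python
from collections import Counter

# Count word frequencies per class in labeled training mail, then
# label new mail by which class's vocabulary it matches more strongly.
training = [
    ("win cash prize now", "spam"),
    ("cheap prize offer win", "spam"),
    ("meeting agenda for monday", "ham"),
    ("project report attached", "ham"),
]

counts = {"spam": Counter(), "ham": Counter()}
for text, label in training:
    counts[label].update(text.split())

def classify(text):
    """Score each class by smoothed word matches; higher score wins."""
    def score(label):
        total = sum(counts[label].values())
        return sum(
            (counts[label][word] + 1) / (total + 1)  # add-one smoothing
            for word in text.split()
        )
    return "spam" if score("spam") > score("ham") else "ham"
```

Real filters use far larger training sets and probabilistic models, but the principle – learning a decision rule from labeled examples – is the same.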

Further enhancement of machine learning algorithms should contribute to the overall computational intelligence of machines.

Natural language processing

Natural language processing and generation are among the central issues with which the artificial intelligence field of study deals. It is no wonder that the Turing test revolves around the ability of machines to converse (at least seemingly) consciously – a machine that is able to understand spoken or written words within their context and respond accordingly is something which can be characterized as an intelligent entity (because it involves abstract properties – social intelligence, knowledge, perception, problem-solving, etc.).

Machine Perception

Machine perception is the capability to interpret input in a way that resembles the processes of human perception through the senses. The important issues being addressed are those of comprehensive perception, transmission to the intelligent core of the entity, and systems of response (that is, machine perception meets difficulties in both its engineering and its computing features).

  • Vision – collecting information based on images of the high-dimensional outside world and transforming it into algorithms/solutions for given problems (currently, machines can exercise facial recognition and aesthetic judgment, but there is a long road of development ahead);
  • Hearing – ability to process audio data such as music or speech (currently: voice recognition, voice translators);
  • Touch – ability to process surface properties and dexterity in order to effectively and intelligently interact with environment.


Robotics

Goals in robotics combine engineering with artificial intelligence studies and revolve around questions of:

  • object manipulation;
  • navigation;
  • localization;
  • mapping;
  • motion planning.



Approaches and Tools

From the emergence of AI research in the 1950s, numerous approaches have been undertaken through the implementation of knowledge in diverse industries and academic circles. These approaches evolved in response to the shortcomings that each of them showed regarding the realization of the ultimate goal – general intelligence. When AI research lost funding during the AI winters, the disintegration into separate approaches was the only way to acquire investments for continued studies. What can be concluded from today’s point of view is that all of these approaches are essential to the vast complexities of artificial intelligence and that all of them contributed immensely to the process (no matter how slow or lacking in exhilarating advancements the process itself might be).


Cybernetics and Brain Simulation

Combining the techniques and knowledge of neurology, information technology and cybernetics, scientists achieved a simulation of basic intelligence in the 1950s. The approach was abandoned in the following decade, only to re-emerge in the 1980s.


Contributions:

  • sensory processing;
  • behavior of neural networks;
  • knowledge of regulatory systems.


The Symbolic Approach

The symbolic approach states that human intelligence can be simulated exclusively through the manipulation of symbols. It is also called ‘good old-fashioned artificial intelligence’ (GOFAI) and had success in high-level intelligence simulation in the 1960s – restricted to confined demonstration programs.


Contributions:

  • expert systems.

Cognitive Simulation

The cognitive simulation approach grew out of psychological tests conducted in order to acquire knowledge of human problem-solving skills. The results were formalized so as to develop programs that would simulate these properties of human intelligence.


Contributions:

  • foundations for artificial intelligence research – machine learning, natural language processing, etc.


The Logic-Based Approach

Representatives of the logic-based approach held that human intelligence in its essence springs from abstract reasoning and problem-solving and can thus be treated with the techniques of logic.


Contributions:

  • knowledge representation;
  • automated planning;
  • machine learning;
  • logic programming.


The Anti-Logic Approach

Opponents of the logic-based approach stated that no general principle can capture the complexity of intelligent behavior.


Contributions:

  • pointed out the inefficiency of the logic-based approach in matters of machine vision and natural language processing.


The Knowledge-Based Approach

The knowledge-based approach has been widely implemented in artificial intelligence research since the emergence of expert systems and the increase in the storage capacities of operational systems.


Contributions:

  • implementation in expert systems;
  • one of the crucial elements of general intelligence.


The Sub-Symbolic Approach

The sub-symbolic approach emerged from the necessity of addressing the sub-symbolic and intuitive spectrum of human intelligence in order to provide optimal solutions for problems of artificial intelligence.


Contributions:

  • computer perception;
  • robotics;
  • machine learning;
  • pattern recognition.


The Situated Approach

The situated, or ‘nouvelle’, artificial intelligence approach focuses on basic engineering problems and rejects the exclusivity of the symbolic approach. The goal is to construct a realistic machine that can exist in the real environment.


Contributions:

  • motor skills;
  • sensory skills;
  • computer perception.


The Statistical Approach

The statistical approach uses measurable and verifiable mathematical tools, combined with concepts from economics, to solve specific problems. The approach is criticized for its disregard of the goal of general intelligence.


Contributions:

  • successful resolution of particular problems.


The artificial intelligence field of study has encountered countless problems in its quest for realization. However, it has developed diverse methods through which these problems can be successfully addressed.

Search and Optimization Method

Searching among many possible solutions, eliminating those which are unlikely to lead to the particular (or overall) goal, and choosing an optimal pathway can be an efficient way of resolving issues. Reasoning, planning and robotics algorithms are created with the assistance of search techniques based on optimization.

Mathematical optimization begins the search for a solution with an informed guess and then refines it incrementally (also referred to as ‘hill climbing’: choosing a random point in the landscape and repeatedly moving uphill toward the summit).

Evolutionary computation follows the ‘survival of the fittest’ principle – a population of guesses is postulated and, through successive rounds of refinement, the weaker guesses fall away until an optimal solution presents itself.
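The hill-climbing idea can be sketched as follows; the one-dimensional ‘landscape’ and step size are illustrative, and real optimizers add restarts or randomness to escape local optima:

```python
import random

def hill_climb(objective, start, step=0.1, iterations=1000):
    """Start from a guess and keep taking the neighboring step that
    improves the objective, stopping at a local optimum."""
    x = start
    for _ in range(iterations):
        best = max([x - step, x + step, x], key=objective)
        if best == x:  # no neighbor improves: local optimum reached
            break
        x = best
    return x

# Maximize a single-peaked 'landscape' whose top lies at x = 3.
peak = lambda x: -(x - 3.0) ** 2
top = hill_climb(peak, start=random.uniform(-10, 10))
```

On this smooth, single-peaked objective the climber always ends within one step of the summit; rugged landscapes are where evolutionary methods, with their populations of competing guesses, earn their keep.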

Logic as a Solution Method

Logic is used for solving problems in automated planning and machine learning, as well as in logic programming. It is used for determining validity through true/false attribution and for expressing facts about objects, their properties and their relations – which is essential for ontologies in knowledge representation.
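A minimal sketch of this use of logic – facts about objects and relations, plus one rule applied by forward chaining – might look as follows (the family-relations domain is purely illustrative):

```python
# Facts are (relation, subject, object) triples.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def grandparents(facts):
    """Rule: parent(X, Y) and parent(Y, Z) implies grandparent(X, Z).

    Forward chaining: try every pair of facts, and whenever both
    premises of the rule match, derive the conclusion.
    """
    derived = set()
    for rel1, x, y in facts:
        for rel2, y2, z in facts:
            if rel1 == rel2 == "parent" and y == y2:
                derived.add(("grandparent", x, z))
    return derived
```

Logic programming languages such as Prolog generalize this pattern: the programmer states facts and rules, and the system derives conclusions automatically.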

Other Methods

  • Probability algorithms for filtering and predictive analysis of streams of data;
  • Classifiers and statistical learning methods;
  • Artificial neural networks;
  • Programming languages (differ according to specific needs of a sub-category of AI).


Issues of AI

Many researchers in the artificial intelligence field state that general intelligence in machines will be achieved within the next fifty years. Although we cannot confirm such statements, it seems plausible that the advancements will happen and will change the world entirely. Consequently, various issues are bound to arise.

Primarily, AI systems have data-processing and predictive-analysis capabilities which greatly surpass those of humans. In order to achieve optimal performance, they are somewhat autonomous, governed by a carefully chosen set of rules in order to reach a given goal. However, due to their autonomy they can act out of balance with their users’ interests – if a potential problem was not addressed in the programming, the system will still pursue a course of action if it serves the goal (and it is impossible for humans to predict all possible situations and specify adequate algorithms for them). The issue must be addressed by providing clear safety criteria in order to minimize damage if an error occurs. Moreover, the proper attribution of responsibility is a question that needs to be addressed regarding artificial intelligence endeavors.

Further on, as general intelligence emerges, humans must define the moral systems according to which they will structure AI systems, but also the moral rules according to which they will position themselves in relation to AI systems. The questions of ethics in artificial intelligence are impossibly complex – how do we determine whether a system is merely programmed to behave and claim to be sentient, or actually is sentient?

Additionally, who is going to be in charge of decision-making regarding general AI? While we are all introduced to the positive and advanced opportunities that AI technology will bring – elimination of disease, space travel, reduction of work, etc. – we seem to forget that humans are capable of massive destruction in pursuit of power and money. Obviously, some regulations on the usage of AI systems will have to be made.


Application in Entrepreneurship

Big Data and Specialized Analytics

Over the past few years, with the exponential growth in technological capabilities (primarily those of storage and computing), the influx of data has increased enormously. Today, companies can collect and process Big Data in structured and unstructured (pictures, videos) forms and analyze it so as to obtain valuable insights for business strategy. One of the issues of Big Data management is the lack of experts who can make sense of it and put it into practice. Various software solutions have been presented to simplify the process – such as expert systems and predictive analysis. Obviously, these are products of artificial intelligence studies.

However, as the algorithms evolve, so will their influence on data management. Machine learning is a data-based predictive and decision-making technique that can, when combined with natural language processing, present usable (and valuable) information and solutions regarding business strategies (advertising, customer relations, coaching employees) with the overall goals of increasing productivity, customer engagement (satisfaction), competitiveness on the market, and growth.

Optimization of Products and Services

Artificial intelligence algorithms will be implemented not only in business management but also in product efficiency and desirability. For example, lawn mowers will be able to mow lawns without human participation. Moreover, they will be able to perform specialized and personalized tasks, such as avoiding pulling out flowers. All this will contribute to customer satisfaction, because it represents a continuous decrease in the time and effort required from the customer for maximized efficiency and value.


Examples of AI Implementation in Business

In addition to IBM’s significant efforts in artificial intelligence since the field’s beginnings, big companies such as Google and Facebook have had to attend to AI possibilities as well, because of the massive amounts of data they handle and their complex management and strategy-defining processes. Here we will take a look at these three companies and their involvement in AI.


IBM

In addition to the significant public success which IBM received with AI endeavours such as the Deep Blue chess-playing system and the complex Watson system, the actual benefits lie in the capabilities which these technologies mastered and their implementation in business. Deep Blue performed an enormous amount of predictive analysis, maximizing efficiency according to the rules of chess, and showed that given a clear formulation of goals, there is no need (as it would be impossible) to cover the possible solutions manually – the computer did it autonomously and, restricted to the objective it was programmed for, optimized in such a manner that even a chess champion could not override the process.

The Watson system was developed as a real-time question-answering algorithm that managed to perceive and process natural language, reason out correct answers and generate them in natural language – and it won the Jeopardy quiz while operating offline. It was built on a machine learning basis, because manually implementing an ontology of such vast knowledge would have been time-consuming and possibly ineffective.

These advancements are extremely significant for business strategies as they optimize broad processing of relevant content and enable constructive communication in order to present insights and perform decisions based on these analytical processes.

Currently, IBM is focused on implementing their algorithms in a cloud-based environment and creating databases for health-care, business and education.


Google

Google has been using artificial intelligence features for the personalization and specification of its search engine; developed Google Translate, a serviceable natural language processing and generation tool (aside from its shortcomings in matters of context and sub-symbolic meaning); and implemented neural network strategies in the management of its immense databases. These neural strategies are designed to recognize patterns and make decisions upon them extremely fast. Machine learning algorithms are also included, which means that the systems learn through experience and thus perform more effectively over time.


Facebook

Facebook profiles are a melting pot of structured and unstructured data: friends lists, pages liked, groups joined. In order to optimize customer experience, Facebook implements artificial intelligence to recognize the behavioral patterns of individual users (on the Facebook domain, as well as online in general) and makes offers according to particular inclinations and interests. Its efforts are heading towards creating an intelligent agent that will be able to interact with users and provide valuable information instantaneously.

Considering Moore’s law and the exponential growth of technology and knowledge, we can predict that science fiction’s depictions of the future are actually right around the corner, even taking the complexity of the objectives into consideration. Although there are numerous issues regarding the realization of AI and ethical conundrums across its diverse spectrum, progress is happening and will bring a lot of positive features with it. In business, it will enable strategies designed for individual users – increasing their satisfaction and generating profit for the enterprise. It will have even more far-reaching consequences in medicine, sustainable economies, poverty reduction and education. We should only hope that the progress will always serve its altruistic purposes.
