The OpenAI laboratory has created a neural network which it considers very dangerous
Artificial Intelligence

admin · November 11, 2019 · 3 min read
The OpenAI research laboratory has opened access to the full version of its neural network GPT-2, designed to generate coherent text on any topic. The model was ready as early as February, but the developers were so struck by what their invention could produce that they were simply afraid to release it into the world. Instead, several cut-down versions were published to see how the Internet community would react to them and, most importantly, how it would use them.

The GPT-2 neural network was trained on 8 million texts from the Internet and can quickly and accurately grasp the essence of a passage, draw conclusions, and continue the text. Given a catchy headline, for example, it can write a "sensational" news story convincing enough that many readers will take it for the truth. The model handles literary devices as well as technical writing, composes poems, and can hold a conversation, giving detailed answers to questions.

Experts' fears stem from how persuasive GPT-2's texts look. The model cannot literally lie and has no malicious intent, but it juggles words skillfully enough to produce impressive-sounding phrases. Of course, the network also has clear weaknesses: it cannot sustain a long story and works only with short texts, and it can make gross errors by misinterpreting the name of an unfamiliar object.

In the end, the decision followed the principle of fighting fire with fire: instead of hiding GPT-2, the developers gave everyone full access so that anyone can test the neural network for themselves. The more people become familiar with it, the more knowledgeable, and thus less vulnerable, they will become, and the use of such artificial intelligence for selfish purposes will no longer have such destructive consequences.
