AI is one of the biggest buzzwords I have ever seen in technology. I can't browse the Internet without being served ads for AI products, including ones my devices keep begging me to run. AI is everywhere you look in 2025, but the neural networks behind it are a fair bit older than you might think. This sort of AI was already being developed by the 1950s, though it wasn't until 2012 that we saw it leapfrog the then-current generation of machine learning with AlexNet: an image classification bot whose code has just been released as open source by Google and the Computer History Museum.
We have seen many different ideas of AI over the years, but the term is generally applied to computers with learning or self-teaching capabilities. Although the concept has been explored by science fiction authors since the 1800s, it is far from fully realized. What we call AI today refers to language models and machine learning, as opposed to independent thinking or reasoning by a machine. These deep learning techniques boil down to feeding large datasets to a computer to train it for specific tasks.
The idea behind deep learning is not new. Researchers as far back as the 1950s, such as Frank Rosenblatt, had already created simple machine learning neural networks resembling what we have today. Unfortunately, the technology never quite caught on, and the idea was largely abandoned. It wasn't until the 1980s that machine learning made a real comeback.
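To give a sense of just how simple those early networks were, here is a minimal sketch of a Rosenblatt-style perceptron in Python, learning the logical AND function. The toy data, learning rate, and epoch count are illustrative assumptions, not anything from the original hardware.

```python
# A minimal perceptron in the spirit of Rosenblatt's 1950s work,
# learning the logical AND function (illustrative toy example).
import numpy as np

def train_perceptron(inputs, targets, lr=0.1, epochs=20):
    weights = np.zeros(inputs.shape[1])
    bias = 0.0
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            # Step activation: fire if the weighted sum crosses the threshold
            y = 1 if np.dot(weights, x) + bias > 0 else 0
            # Rosenblatt's update rule: nudge weights toward the target
            weights += lr * (t - y) * x
            bias += lr * (t - y)
    return weights, bias

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])  # AND truth table
w, b = train_perceptron(X, y)
print("weights:", w, "bias:", b)  # a line that separates AND's outputs
```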
In 1986, Geoffrey Hinton, David Rumelhart, and Ronald J. Williams published a paper on backpropagation, an algorithm that adjusts a neural network's weights based on the cost of its responses. They were not the first to raise the idea, but they were the first to popularize it. Backpropagation as a machine learning concept had been floated by many, including Frank Rosenblatt, as early as the 1960s, but it could never quite be made practical. Many also regard it as an application of the chain rule to machine learning, the earliest written attribution of which goes to Gottfried Wilhelm Leibniz in 1676.
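At its core, backpropagation really is the chain rule applied one layer at a time: the cost's gradient with respect to each weight is found by multiplying derivatives backward through the network. Here is a minimal numpy sketch for a one-hidden-layer network; the toy data, layer sizes, and learning rate are all assumptions made for illustration.

```python
# A minimal sketch of backpropagation: the chain rule applied to a tiny
# one-hidden-layer network, fitting hypothetical toy data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 2))                        # toy inputs (assumption)
t = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]  # toy targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

for step in range(500):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    y = h @ W2 + b2
    loss = np.mean((y - t) ** 2)    # the "cost" being minimized

    # Backward pass: each gradient is the chain rule, one layer at a time
    dy = 2 * (y - t) / len(X)       # dLoss/dy
    dW2 = h.T @ dy                  # dLoss/dW2
    db2 = dy.sum(axis=0)
    dh = dy @ W2.T                  # push the error back through W2
    dpre = dh * (1 - h ** 2)        # through tanh: d tanh(z)/dz = 1 - tanh(z)^2
    dW1 = X.T @ dpre
    db1 = dpre.sum(axis=0)

    # Gradient descent: adjust every weight against its gradient
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= 0.1 * g

print(f"final loss: {loss:.4f}")
```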
Despite the promising results, the technology of the time simply wasn't enough to make deep learning viable. Bringing AI to the level we see today would require much larger datasets for training, and far more computational power to crunch them.
In 2006, Professor Fei-Fei Li of Stanford University began building ImageNet. Li envisioned a database containing an image for every English noun, so she and her students began collecting and categorizing photographs. They used WordNet, an established collection of words and their relationships, to label the images. The task was so enormous that it was eventually outsourced to freelancers, until it was realized as the largest dataset of its kind in 2009.
Around the same time, Nvidia was working on the CUDA programming system for its GPUs. This is the same company that just doubled down on AI at GTC 2025, even using the tech to help people learn sign language. With CUDA, these powerful number-crunching chips could be far more easily programmed to tackle problems other than visual graphics. That gave researchers the opportunity to start implementing neural networks in areas such as speech recognition, and to actually see some success.
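To show what "programming a GPU for things other than graphics" looks like in practice, here is a minimal sketch using Numba's CUDA bindings in Python. This is purely illustrative (Nvidia's own toolkit is C/C++ based, and Krizhevsky's code predates Numba), and it assumes a CUDA-capable GPU with the numba package installed.

```python
# A minimal sketch of general-purpose GPU computing via Numba's CUDA
# support. Illustrative only: not Nvidia's C/C++ toolkit.
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)             # this thread's global index
    if i < out.size:
        out[i] = a[i] + b[i]     # ordinary arithmetic, not a graphics shader

n = 1_000_000
a = np.arange(n, dtype=np.float32)
b = np.arange(n, dtype=np.float32)
out = np.zeros(n, dtype=np.float32)

threads = 256
blocks = (n + threads - 1) // threads
vector_add[blocks, threads](a, b, out)   # thousands of threads run at once
print(out[:3])                           # [0. 2. 4.]
```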
In 2011, two of Hinton's graduate students, Ilya Sutskever (who went on to co-found OpenAI) and Alex Krizhevsky, began working on what would become AlexNet. Sutskever saw the potential from his earlier work and persuaded Krizhevsky to harness the power of GPUs to train the neural network, while Hinton acted as principal investigator. Over the next year, Krizhevsky trained, tweaked, and retrained the system on a single computer fitted with two NVIDIA GPUs, using his own CUDA code. In 2012, the three published a paper, which Hinton also presented at a computer vision conference in Florence.
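For readers curious what the network itself looked like, here is a sketch of the layer layout described in the 2012 paper, written in modern PyTorch for illustration. The code CHM released is Krizhevsky's original CUDA implementation, not this; the sketch below is simply the same architecture in today's idiom.

```python
# A sketch of the AlexNet layer layout from the 2012 paper, in modern
# PyTorch. The 2012 code split these layers across two GPUs; here the
# network is a single stack for clarity.
import torch
import torch.nn as nn

alexnet = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(), nn.Dropout(),
    nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(),
    nn.Linear(4096, 1000),        # one score per ImageNet class
)

x = torch.randn(1, 3, 227, 227)   # one ImageNet-sized input image
print(alexnet(x).shape)           # torch.Size([1, 1000])
```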
Hinton summarized the experience to CHM as "Ilya thought we should do it, Alex made it work, and I got the Nobel Prize."
It didn't make much noise at the time, but AlexNet completely changed the direction of modern AI. Before AlexNet, neural networks were not commonplace in these developments. Now, they form the framework of nearly everything bearing the AI name, from neural-network-powered robot dogs to smart headsets. As computers grow more powerful, we're only set to see more of them.
Given how pivotal AlexNet has been for AI, CHM's release of the source code is not only a fascinating piece of history, but also a wise move to ensure the information remains freely available to everyone. To make sure the release was handled fairly, correctly, and above all legally, CHM reached out to AlexNet's namesake, Alex Krizhevsky, who put them in touch with Hinton, who had been working with Google after it acquired his company. Now regarded as one of the fathers of machine learning, Hinton was able to connect CHM with the right team at Google, kicking off a five-year negotiation process before the release.
This may mean that the AlexNet now available to everyone on GitHub is a somewhat sanitized version, but that's fair enough. Plenty of codebases with similar or identical names have circulated before, but those were likely tributes or reinterpretations. This upload is described as the "AlexNet source code as it was in 2012," so it should serve as an interesting marker along AI's path, whatever shape the field takes in the future.