11/12/2023

Headline, November 13 2023/ ''' GALAXY BRAINS GALORE '''


COMPUTING POWER AND TRAINING A.I. SYSTEMS : In artificial intelligence, a team at Google has come up with a different option for those who can get by with smaller models.

This approach distils the specific knowledge required from a big, general-purpose model into a smaller, specialised one.

The big model acts as a teacher, and the smaller one as a student. The researchers ask the teacher to answer questions and show how it comes to its conclusions.

Both the answers and the teacher's reasoning are used to train the student model. The team was able to train a student model with just 770 million parameters, which outperformed its 540 billion-parameter teacher on a specialised reasoning task.
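
To make the recipe concrete, here is a minimal sketch in Python of how such distillation can be wired up, using the Hugging Face transformers library. It is illustrative rather than Google's actual code : the student model, the training example and the task prefixes below are all hypothetical stand-ins.

```python
# A minimal distillation sketch, not Google's actual code. The teacher's
# answers and rationales are assumed to have been collected already; the
# student here is a small off-the-shelf seq2seq model.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-base")
student = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
optimizer = torch.optim.AdamW(student.parameters(), lr=3e-5)

# Hypothetical training data: each question is paired with the teacher's
# final answer and the teacher's step-by-step reasoning.
examples = [{
    "question": "If a pen costs 2 dollars, how much do 3 pens cost?",
    "answer": "6 dollars",
    "rationale": "Each pen costs 2 dollars, and 3 x 2 = 6.",
}]

for ex in examples:
    # Train the student on two targets per question: the teacher's answer
    # and, separately, the teacher's reasoning.
    for prefix, target in [("answer", ex["answer"]),
                           ("explain", ex["rationale"])]:
        inputs = tokenizer(f"{prefix}: {ex['question']}", return_tensors="pt")
        labels = tokenizer(target, return_tensors="pt").input_ids
        loss = student(**inputs, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```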

Rather than focus on what the models are doing, another approach is to change how they are made.

A great deal of AI programming is done in a language called Python. It is designed to be easy to use, freeing coders from the need to think about exactly how their programs will behave on the chips that run them.

The price of abstracting such details away is slow code. Paying more attention to these implementation details can bring big benefits.
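
A toy illustration of that trade-off, not from the article : the same arithmetic written as a plain Python loop, which pays interpreter overhead on every step, and as a single call into NumPy's compiled routines.

```python
# Illustrative benchmark: summing a million products in pure Python versus
# handing the whole computation to optimised native code via NumPy.
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

start = time.perf_counter()
total = 0.0
for i in range(n):  # every iteration goes through the Python interpreter
    total += a[i] * b[i]
print("pure Python loop:", time.perf_counter() - start, "seconds")

start = time.perf_counter()
total = np.dot(a, b)  # one call into compiled, hardware-aware code
print("NumPy dot product:", time.perf_counter() - start, "seconds")
```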

''This is a huge part of the game at the moment'', says Thomas Wolf, chief science officer of Hugging Face, an open-source AI company.

LEARN TO CODE : In 2022, for instance, researchers at Stanford University published a modified version of the ''attention algorithm'', which allows LLMs to learn connections between words and ideas.

The idea was to modify the code to take account of what is happening on the chip that is running it, and especially to keep track of when a given piece of information needs to be looked up or stored.

Their algorithm was able to speed up the training of GPT-2, an older large language model, threefold. It also gave the model the ability to respond to longer queries.
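
That work is known as FlashAttention, and kernels in its style are now built into mainstream tooling. The sketch below, assuming PyTorch 2, contrasts a naive attention implementation, which writes out the full matrix of scores between every pair of positions, with PyTorch's fused scaled_dot_product_attention, which can dispatch to a memory-aware kernel that computes the same result without materialising that matrix.

```python
# Naive attention versus a fused, memory-aware kernel (PyTorch 2 sketch).
import math
import torch
import torch.nn.functional as F

q = torch.randn(1, 8, 1024, 64)  # (batch, heads, sequence length, head dim)
k = torch.randn(1, 8, 1024, 64)
v = torch.randn(1, 8, 1024, 64)

# Naive version: the full 1024 x 1024 score matrix is built in memory.
scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
naive_out = torch.softmax(scores, dim=-1) @ v

# Fused version: the same mathematics, but the kernel keeps track of what
# must be loaded or stored on-chip and never materialises the full matrix.
fused_out = F.scaled_dot_product_attention(q, k, v)

print(torch.allclose(naive_out, fused_out, atol=1e-4))  # same result
```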

Sleeker code can also come from better tools. Earlier this year, Meta released an updated version of PyTorch, an AI-programming framework.

By allowing coders to think more about how computations are arranged on the actual chip, it can double a model's training speed with the addition of just one line of code.
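
The line in question is torch.compile, which traces a model and generates code arranged for the chip it runs on. A minimal sketch, with a hypothetical toy model :

```python
import torch

# A hypothetical toy model; any PyTorch model works the same way.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
)

model = torch.compile(model)  # the single added line

x = torch.randn(32, 512)
y = model(x)  # the first call triggers compilation; later calls run fast
```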

Modular, a startup founded by former engineers at Apple and Google, last month released a new AI-focused programming language called Mojo, which is based on Python.

It too gives coders control over all sorts of fine details that were previously hidden. In some cases, code written in Mojo can run thousands of times faster than the same code in Python.

A final option is to improve the chips on which that code runs. GPUs are only accidentally good at running AI software : they were originally designed to process the fancy graphics in modern video games.

In particular, says a hardware researcher at Meta, GPUs are imperfectly designed for ''inference'' work (ie, actually running a model once it has been trained).

Some firms are therefore designing their own, more specialised hardware. Google already runs most of its AI projects on its in-house ''TPU'' chips. Meta, with its MTIA chips, and Amazon, with its Inferentia chips, are pursuing a similar path.
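
In code terms, inference is simply the forward pass of a model whose weights are frozen, and that forward pass is what such chips are built to accelerate. A minimal, illustrative PyTorch sketch :

```python
import torch

# A hypothetical trained model standing in for a real one.
model = torch.nn.Linear(512, 10)
model.eval()  # switch layers such as dropout into inference behaviour

with torch.no_grad():  # skip the gradient bookkeeping only training needs
    prediction = model(torch.randn(1, 512))
print(prediction.shape)  # torch.Size([1, 10])
```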

That such big performance increases can be extracted from relatively simple changes, like rounding numbers or switching programming languages, might seem surprising.

But it reflects the breakneck speed with which LLMs have been developed. For many years they were research projects, and simply getting them to work well was more important than making them elegant.

Only recently have they graduated to commercial mass-market products. Most experts think there remains plenty of room for improvement.

As Chris Manning, a computer scientist at Stanford University, put it : ''There's absolutely no reason to believe...that this is the ultimate neural architecture, and we will never find anything better.''

The Honour and Serving of the Latest Global Operational Research on Artificial Intelligence, its Parameters and Computer Power, continues. The World Students Society thanks The Economist.

With respectful dedication to The Global Founder Framers of The World Students Society - the exclusive ownership of every student in the world, and then Mankind, Students, Professors and Teachers.

See You all prepare and register for Great Global Elections on !WOW! - for every subject in the world : wssciw.blogspot.com and Twitter X !E-WOW! - The Ecosystem 2011 :

Good Night and God Bless

SAM Daily Times - the Voice of the Voiceless
