Gemini

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

Werewolf
Posts: 1848
Joined: Thu Sep 18, 2008 10:24 pm

Re: Gemini

Post by Werewolf »

towforce wrote: Tue May 14, 2024 12:40 am Btw - Gemini can already solve linear equations of the type shown in the long GPT-4o video: I asked it to solve:

What is y in this equation?

4y + 7 = 15


It gave a detailed step by step response that was completely correct. I will still use a CAS (Computer Algebra System) to do maths for the time being, though.
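
For reference, this is roughly what "using a CAS" looks like for that equation; a minimal sketch, assuming SymPy as the CAS:

Code: Select all

# Minimal sketch of solving the same equation with a CAS,
# assuming SymPy is installed (pip install sympy).
from sympy import symbols, Eq, solve

y = symbols('y')
equation = Eq(4*y + 7, 15)   # 4y + 7 = 15
print(solve(equation, y))    # prints [2]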

I will, of course, give GPT-4o a try when it becomes available (but not for chess - sorry!).
The point of the video is not to solve a simple maths equation; earlier versions of ChatGPT could already do that.
The point of the video is to show the voice and visual combination, illustrated with a maths equation and friendly chat, all happening at once in real time.
smatovic
Posts: 2798
Joined: Wed Mar 10, 2010 10:18 pm
Location: Hamburg, Germany
Full name: Srdja Matovic

Re: Gemini

Post by smatovic »

Werewolf wrote: Tue May 14, 2024 9:18 am [...]
The point of the video is not to solve a simple maths equation; earlier versions of ChatGPT could already do that.
The point of the video is to show the voice and visual combination, illustrated with a maths equation and friendly chat, all happening at once in real time.
+1

The so-called multimodal models take text, images, audio and video as input/output; this demo clearly goes in the direction of AGI.

--
Srdja
towforce
Posts: 11751
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK

Re: Gemini

Post by towforce »

It looks as though the LLM market is going in the wrong direction right now - and that this is possible because the market is brand new (dating from November 2022, when ChatGPT, based on GPT-3.5, was launched).

When you need legal advice, you don't go to a "generally intelligent person", you go to a solicitor; one would therefore expect specialist legal LLM services to arise. Using a generalised LLM for chess is likewise a poor use case.

However, it's too early to say for sure, and I might be completely wrong: it might be that in 5-10 years, whether you want to write a poem for your girlfriend, get some legal advice, create a marketing campaign for your business, program your personal robot, or get some chess coaching (or any other type of self-improvement programme), you always go to the same service - a generalist AI like Gemini or GPT.
The simple reveals itself after the complex has been exhausted.
Werewolf
Posts: 1848
Joined: Thu Sep 18, 2008 10:24 pm

Re: Gemini

Post by Werewolf »

smatovic wrote: Tue May 14, 2024 9:57 am
Werewolf wrote: Tue May 14, 2024 9:18 am [...]
The point of the video is not to solve a simple maths equation; earlier versions of ChatGPT could already do that.
The point of the video is to show the voice and visual combination, illustrated with a maths equation and friendly chat, all happening at once in real time.
+1

The so-called multimodal models take text, images, audio and video as input/output; this demo clearly goes in the direction of AGI.

--
Srdja
Yes, it now seems a question of when, not if.

What do you think about this?
https://www.facebook.com/reel/1168212794103785
towforce
Posts: 11751
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK

Re: Gemini

Post by towforce »

Werewolf wrote: Tue May 14, 2024 12:07 pm What do you think about this?
https://www.facebook.com/reel/1168212794103785

I actually have an example!

For those of us who are not American, it's annoying that Google's NotebookLM (an LLM that learns from and answers out of your own notes - a toy sketch of how that kind of tool typically works follows the list below) is only available in the USA. However, there's a free alternative called AnythingLLM, which you have to download, but which, in contrast to a lot of free AI tools, apparently is:

1. easy to download and use
2. good at what it does
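
Roughly how that kind of "chat with your notes" tool works under the hood (a toy sketch only - this is not AnythingLLM's or NotebookLM's actual code, and TF-IDF stands in for a proper embedding model):

Code: Select all

# Toy sketch: pick the note most relevant to a question, which a tool like
# this would then feed to the LLM as context. Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

notes = [
    "Lc0 uses a neural network trained through distributed self-play.",
    "Stockfish NNUE evaluates positions with a small, efficiently updatable net.",
    "Gemini and GPT-4o are multimodal models handling text, images and audio.",
]
question = "Which engines use neural networks?"

vectorizer = TfidfVectorizer()
note_vectors = vectorizer.fit_transform(notes)
query_vector = vectorizer.transform([question])

scores = cosine_similarity(query_vector, note_vectors)[0]
best = scores.argmax()
print(f"Most relevant note: {notes[best]}")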

However, I don't share the optimism of the guy in the video; the Windows situation shows why free alternatives don't necessarily win:

1. Windows is overpriced crap
2. There are good free alternatives
3. Yet most of us continue to use Windows anyway
The simple reveals itself after the complex has been exhausted.
smatovic
Posts: 2798
Joined: Wed Mar 10, 2010 10:18 pm
Location: Hamburg, Germany
Full name: Srdja Matovic

Re: Gemini

Post by smatovic »

Werewolf wrote: Tue May 14, 2024 12:07 pm What do you think about this?
https://www.facebook.com/reel/1168212794103785
I agree with that (who profits from regulation), and "10x dev for free" is also one reason why the big players release open-source versions of their models. I would add that it is about the "Magnificent Seven", not only Microsoft and Google: all seven big tech players have enough market cap to enter the AI (silicon) race and grab a piece of the future market cake. And when you look at past tech bubbles (the video game crash of 1983, the dot-com bust of 2000/2001, the crypto meltdown of 2022), this tech bubble is different, because generative AI generates so-called surplus value:

https://en.wikipedia.org/wiki/Surplus_value

A new smartphone does not generate surplus value, and a bitcoin does not generate surplus value, but generative AIs do:

Jon Stewart On The False Promises of AI | The Daily Show


(you can skip the first three minutes)

So when it comes to open source vs. the big players in the context of generative AI, the question is the training data, i.e. the content to feed/train the AI with: what data does open source rely on, and what data do the big tech players have access to?

The Lc0 and Stockfish projects showed that the community can throw in hardware for a community compute grid, but what about access to training data such as text, images, audio, video and 3D models?

And some researchers already say that big tech is running out of human-generated data sources, so they are considering approaches like GANs and reinforcement learning to train generative AIs.

Interesting times for sure - it is really taking off, and the scenarios from all those sci-fi movies and books are popping up right now :)

--
Srdja
smatovic
Posts: 2798
Joined: Wed Mar 10, 2010 10:18 pm
Location: Hamburg, Germany
Full name: Srdja Matovic

Re: Gemini

Post by smatovic »

towforce wrote: Tue May 14, 2024 11:27 am [...]
or get some chess coaching (or any other type of self-improvement programme), you always go to the same service - a generalist AI like Gemini or GPT.
Consider also demographic change: in the future, if you can afford it, you get a real human as a chess trainer (or as a doctor, etc.); everyone else relies on AI ;)

--
Srdja
towforce
Posts: 11751
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK

Re: Gemini

Post by towforce »

smatovic wrote: Tue May 14, 2024 2:52 pm ...in the future, if you can afford it, you get a real human as a chess trainer (or as a doctor, etc.); everyone else relies on AI ;)

One constant: people always say they would prefer to be served by a human rather than by a machine - but when offered the choice of paying for the human or getting the machine service much more cheaply, they mostly choose the machine.

Also, which doctor do you want - the human one with a 2% error rate, or the machine with a 0.1% error rate?

This video discusses a recent paper that challenges the idea that simply adding more data and larger models will lead to general AI capabilities that can solve any task. The paper investigates the performance of large vision-language models like CLIP on downstream tasks like classification and retrieval across various concepts of varying complexity.

The key findings are:

1. Performance improves only logarithmically as the number of training examples for a concept increases, rather than showing steep, continuous improvement. This suggests a plateau where adding more data provides diminishing returns (a toy numerical illustration follows this list).

2. Common concepts like "cat" are over-represented in datasets, while more specific concepts like particular species are severely under-represented. Models perform well on common concepts but struggle with rare, complex cases.

3. The paper argues that to achieve high performance on difficult, under-represented tasks, simply scaling data and model size may be inefficient. New techniques beyond just collecting more data may be needed to overcome this limitation.
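
To make finding 1 concrete, here is a toy illustration with made-up coefficients (not numbers from the paper), assuming accuracy grows with the logarithm of the per-concept example count: each 10x increase in data buys roughly the same fixed gain, so per-example returns keep shrinking.

Code: Select all

# Toy illustration of log-linear scaling: accuracy ~ a + b * log10(n).
# The coefficients are made up; the point is the diminishing returns.
import math

a, b = 0.30, 0.08  # hypothetical fit, not taken from the paper

def accuracy(n: int) -> float:
    return min(1.0, a + b * math.log10(n))

prev = None
for n in (10**k for k in range(2, 9)):
    acc = accuracy(n)
    note = "" if prev is None else f"  (+{acc - prev:.3f} for 10x more data)"
    print(f"n = {n:>11,d}: accuracy = {acc:.3f}{note}")
    prev = acc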

The author acknowledges that large tech companies with more resources may find ways to incrementally improve performance. However, the paper's evidence casts doubt on the idea that scaling alone will lead to general, human-level AI abilities anytime soon. The author suggests the need to explore different machine learning strategies beyond just data scaling.


The simple reveals itself after the complex has been exhausted.
smatovic
Posts: 2798
Joined: Wed Mar 10, 2010 10:18 pm
Location: Hamburg, Germany
Full name: Srdja Matovic

Re: Gemini

Post by smatovic »

towforce wrote: Tue May 14, 2024 3:58 pm [...]
The key findings are:
[...]
Does the same apply to NNs in computer chess? ;)

--
Srdja
towforce
Posts: 11751
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK

Re: Gemini

Post by towforce »

smatovic wrote: Tue May 14, 2024 4:20 pm
towforce wrote: Tue May 14, 2024 3:58 pm [...]
The key findings are:
[...]
Does the same apply to NNs in computer chess? ;)

--
Srdja

Yes - but kudos and gratitude to the Modern Chess Programmers who are optimising everything - which likely includes NN size (speed v knowledge). :)
The simple reveals itself after the complex has been exhausted.