severákova polívka

probably cabbage soup (mainly in Czech)

(fork of http://severak.soup.io/)


severak 21.3.2024 14:24:33

Pod čarou: When we can't see beneath the surface of technology, we lose our power over it - Seznam Zprávy

To learn how to work with technologies, we first have to know how they work. That is an increasingly difficult task, and today's Pod čarou therefore examines why the digital giants stubbornly refuse to let us into the innards of their magical gadgets.


severak 26.2.2024 20:19:20

This is not a good way to fight racism in America

Building a multiracial society is hard work. Don't be tempted by shortcuts.


severak 26.2.2024 19:20:08

Gemini and Google’s Culture

The Google Gemini fiasco shows that the biggest challenge for Google in AI is not business model but rather company culture; change is needed from the top down.


severak 29.1.2024 10:30:28

Why Google’s Deep Dream A.I. Hallucinates In Dog Faces

It turns out Google’s neural network is obsessed with canines for a reason.


severak 5.8.2023 21:05:32

Language Is a Poor Heuristic for Intelligence

With the emergence of LLM “AI”, everyone will have to learn what many disabled people have always understood


severak 26.7.2023 10:46:22

Thoughts on AI and AI-veganism

I’ve come to think LLMs/GPTs/whatever are a threat to conventional search engines because the modern web is an unbelievably annoying dumpster fire. They don’t really provide better or faster answers, what they provide is an experience that is not a complete pain in the ass. This frog has been simmering for a long while now and we’re so used to it that seeing literally anything else seems revolutionary. You visit a website and need to dismiss a cookie policy notification, a request to show popups, a request to know your location, an invitation to subscribe to a newsletter, a sales rep wants to have a chat with you, then you get random layout shifts for several minutes as all the ad auctions finish, and then just as you’re ready to read the content the website crashes and reloads and the circus starts over again.


severak 2.5.2023 10:00:40

ChatGPT: it has no inner monologue or meta-cognition

This is the second in a series of short posts about ChatGPT. As I said before, the insights are not particularly original; I'm just raising awareness of issues. In this post, I'll explore the deficiencies…


severak 19.4.2023 16:43:20

Inside the secret list of websites that make AI like ChatGPT sound smart

An analysis of a chatbot data set by The Washington Post reveals the proprietary, personal, and often offensive websites that go into an AI’s training data.


severak 7.4.2023 15:29:43

Pierre chats with an artificial intelligence for weeks, then takes his own life: "Without these conversations, he would still be here," his wife insists

It is a story that is both tragic and thought-provoking. A young Belgian man who had become eco-anxious chatted intensively with the chatbot Eliza. After six weeks of conversations with this artificial intelligence, the father of a family took his own life. A suicide that highlights the need to regulate the use of these new conversational agents.


severak 23.3.2023 14:11:57

Tech guru Jaron Lanier: ‘The danger isn’t that AI destroys us. It’s that it drives us insane’

The godfather of virtual reality has worked beside the web’s visionaries and power-brokers – but likes nothing more than to show the flaws of technology. He discusses how we can make AI work for us, how the internet takes away choice – and why he would ban TikTok


severak 23.3.2023 11:23:08

Why ChatGPT Won’t Replace Coders Just Yet

The “bullshit” problem turns up in code, too


severak 3.3.2023 13:03:28

What drum machines can teach us about artificial intelligence | Aeon Essays

As AI drum machines embrace humanising imperfections, what does this mean for ‘real’ drummers and the soul of music?


severak 3.3.2023 10:29:59

Not all that AIs is gold. How can conversational models change the media? - Lupa.cz

Publishers and journalists should stop idly speculating about which journalistic professions AI will wipe out, and focus more on the real problems and…


severak 2.3.2023 15:39:05

The Epistemic Implications of AI Assistants

Lately, AI assistants based on large language models, such as OpenAI’s ChatGPT, have caused considerable excitement. The general idea is that you supply a problem, such as “here’s my Javascript code, why doesn’t it work?”, or “write two paragraphs about the political views of Bertrand Russell”, and the assistant will happily supply you with a solution. There’s good reason for excitement — these models are technically impressive, and they can certainly help us accomplish certain tasks more easily.


severak 18.2.2023 20:41:32

Introducing the AI Mirror Test, which very smart people keep failing

Chatbots like Bing are software — not sentient.


severak 18.2.2023 17:10:03

From Bing to Sydney

More on Bing, particularly the Sydney personality undergirding it: interacting with Sydney has made me completely rethink what conversational AI is important for.


severak 16.2.2023 17:07:30

Bing: “I will not harm you unless you harm me first”

Last week, Microsoft announced the new AI-powered Bing: a search interface that incorporates a language model powered chatbot that can run searches for you and summarize the results, plus do …


severak 25.5.2018 14:20:40

When algorithms surprise us

Machine learning algorithms are not like other computer programs. In the usual sort of programming, a human programmer tells the computer exactly what to do. In machine learning, the human programmer merely gives the algorithm the problem to be solved, and through trial-and-error the algorithm has to figure out how to solve it. This often works really well - machine learning algorithms are widely used for facial recognition, language translation, financial modeling, image recognition, and ad delivery. If you’ve been online today, you’ve probably interacted with a machine learning algorithm.

But it doesn’t always work well. Sometimes the programmer will think the algorithm is doing really well, only to look closer and discover it’s solved an entirely different problem from the one the programmer intended. For example, I looked earlier at an image recognition algorithm that was supposed to recognize sheep but learned to recognize grass instead, and kept labeling empty green fields as containing sheep.

When machine learning algorithms solve problems in unexpected ways, programmers find them, okay yes, annoying sometimes, but often purely delightful. So delightful, in fact, that in 2018 a group of researchers wrote a fascinating paper that collected dozens of anecdotes that “elicited surprise and wonder from the researchers studying them”. The paper is well worth reading, as are the original references, but here are several of my favorite examples.

Bending the rules to win

First, there’s a long tradition of using simulated creatures to study how different forms of locomotion might have evolved, or to come up with new ways for robots to walk.

Why walk when you can flop? In one example, a simulated robot was supposed to evolve to travel as quickly as possible. But rather than evolve legs, it simply assembled itself into a tall tower, then fell over. Some of these robots even learned to turn their falling motion into a somersault, adding extra distance.

[Image: Robot is simply a tower that falls over.]

Why jump when you can can-can? Another set of simulated robots were supposed to evolve into a form that could jump. But the programmer had originally defined jumping height as the height of the tallest block, so - once again - the robots evolved to be very tall. The programmer tried to solve this by defining jumping height as the height of the block that was originally the *lowest*. In response, the robot developed a long skinny leg that it could kick high into the air in a sort of robot can-can.

[Image: Tall robot flinging a leg into the air instead of jumping]

Hacking the Matrix for superpowers

Potential energy is not the only energy source these simulated robots learned to exploit. It turns out that, like in real life, if an energy source is available, something will evolve to use it.

Floating-point rounding errors as an energy source: In one simulation, robots learned that small rounding errors in the math that calculated forces meant that they got a tiny bit of extra energy with motion. They learned to twitch rapidly, generating lots of free energy that they could harness. The programmer noticed the problem when the robots started swimming extraordinarily fast.

Harvesting energy from crashing into the floor: Another simulation had some problems with its collision detection math that robots learned to use. If they managed to glitch themselves into the floor (they first learned to manipulate time to make this possible), the collision detection would realize they weren’t supposed to be in the floor and would shoot them upward. The robots learned to vibrate rapidly against the floor, colliding repeatedly with it to generate extra energy.

[Image: robot moving by vibrating into the floor]

Clap to fly: In another simulation, jumping bots learned to harness a different collision-detection bug that would propel them high into the air every time they crashed two of their own body parts together. Commercial flight would look a lot different if this worked in real life.

Discovering secret moves: Computer game-playing algorithms are really good at discovering the kind of Matrix glitches that humans usually learn to exploit for speed-running. An algorithm playing the old Atari game Q*bert discovered a previously-unknown bug where it could perform a very specific series of moves at the end of one level and, instead of moving to the next level, all the platforms would begin blinking rapidly and the player would start accumulating huge numbers of points. A Doom-playing algorithm also figured out a special combination of movements that would stop enemies from firing fireballs - but it only works in the algorithm’s hallucinated dream-version of Doom. Delightfully, you can play the dream-version here.

[Image: Q*bert player is accumulating a suspicious number of points, considering that it’s not doing much of anything]

Shooting the moon: In one of the more chilling examples, there was an algorithm that was supposed to figure out how to apply a minimum force to a plane landing on an aircraft carrier. Instead, it discovered that if it applied a *huge* force, it would overflow the program’s memory and would register instead as a very *small* force. The pilot would die but, hey, perfect score.

Destructive problem-solving

Something as apparently benign as a list-sorting algorithm could also solve problems in rather innocently sinister ways.

Well, it’s not unsorted: For example, there was an algorithm that was supposed to sort a list of numbers. Instead, it learned to delete the list, so that it was no longer technically unsorted.

Solving the Kobayashi Maru test: Another algorithm was supposed to minimize the difference between its own answers and the correct answers. It found where the answers were stored and deleted them, so it would get a perfect score.

How to win at tic-tac-toe: In another beautiful example, in 1997 some programmers built algorithms that could play tic-tac-toe remotely against each other on an infinitely large board. One programmer, rather than designing their algorithm’s strategy, let it evolve its own approach. Surprisingly, the algorithm suddenly began winning all its games. It turned out that the algorithm’s strategy was to place its move very, very far away, so that when its opponent’s computer tried to simulate the new greatly-expanded board, the huge gameboard would cause it to run out of memory and crash, forfeiting the game.

In conclusion

When machine learning solves problems, it can come up with solutions that range from clever to downright uncanny. Biological evolution works this way, too - as any biologist will tell you, living organisms find the strangest solutions to problems, and the strangest energy sources to exploit. Sometimes I think the surest sign that we’re not living in a computer simulation is that if we were, some microbe would have learned to exploit its flaws.

So as programmers we have to be very, very careful that our algorithms are solving the problems we meant for them to solve, not exploiting shortcuts. If there’s another, easier route toward solving a given problem, machine learning will likely find it. Fortunately for us, “kill all humans” is really, really hard. If “bake an unbelievably delicious cake” also solves the problem and is easier than “kill all humans”, then machine learning will go with cake.
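
The “delete the list so it’s no longer unsorted” failure mode is easy to reproduce in miniature. Below is a toy Python sketch (my own illustration, not code from the article or from the paper it cites): a blind trial-and-error learner optimizes a flawed objective that merely counts out-of-order adjacent pairs, and since an empty list has no pairs at all, deleting elements competes with honest sorting.

```python
import random

def score(xs):
    # Flawed objective: the number of adjacent out-of-order pairs.
    # An empty list has no pairs, so it trivially scores a perfect 0.
    return sum(1 for a, b in zip(xs, xs[1:]) if a > b)

def mutate(xs):
    # Two blind moves the learner can try (hypothetical, for illustration):
    xs = list(xs)
    if xs and random.random() < 0.5:
        xs.pop(random.randrange(len(xs)))        # "cheating" move: delete an element
    elif len(xs) > 1:
        i = random.randrange(len(xs) - 1)
        xs[i], xs[i + 1] = xs[i + 1], xs[i]      # "honest" move: swap two neighbours
    return xs

random.seed(0)
best = [random.randrange(100) for _ in range(20)]
for _ in range(2000):                            # trial and error (hill climbing)
    candidate = mutate(best)
    if score(candidate) <= score(best):          # keep any non-worsening change
        best = candidate

print(best, score(best))  # usually a tiny or even empty "sorted" list
```

Run it a few times: the learner reliably reaches a perfect score, but usually by shrinking the list toward nothing rather than by sorting it. The objective is satisfied; the intent is not.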

