Crypto AI rises again
The crypto AI sector is showing signs of life, with the combined market cap increasing by more than one-third in the past two weeks.
The sector soared to a combined market cap of $70.42 billion back in mid-January and just as quickly plummeted back to earth, bottoming out at $21.46 billion on April 9, according to CoinMarketCap. It's now back at $28.8 billion, with much of the growth coming in the past week.
Near gained 26% in the past seven days, Render is up 23%, the Artificial Superintelligence Alliance gained 36% and Bittensor surged 47%. (That said, even Ether gained 14% this week, so everything has been going up.)
The AI crypto sector's outperformance came hot on the heels of a CoinGecko report on April 17 that found that five of the top 20 crypto narratives are AI-related, capturing 35.7% of global investor interest, ahead of six memecoin narratives, which had a 27.1% share.
The most popular AI narratives were: AI in general (14.4%), AI agents (10.1%), DeFAI (5%, possibly just people asking how to pronounce it), AI memecoins (2.9%), AI agent launchpad (1.8%) and AI framework (1.5%).


Research out this week from CoinGecko suggests the overwhelming majority of crypto users (87%) would be happy to let an AI agent manage at least 10% of their portfolio, while around half would cap the AI's control at 50% of their holdings.
This strong support for a relatively risky new technology suggests it will be a big growth sector in the years ahead. If you want to get in early, check out Olas and its Baby Degen crypto trading AI agents.
Digital Currency Group's Barry Silbert backs Bittensor
Bittensor's big price increase this week may have also been related to Digital Currency Group CEO Barry Silbert talking up the project in a Real Vision podcast.


Silbert created a new venture last year called Yuma that's exclusively focused on building new subnets on Bittensor's AI marketplace. Silbert told Real Vision founder Raoul Pal that decentralized AI is going to be "the next big investment theme for crypto."
"We've backed a number of them, but the one that over the past year or year and a half has reached escape velocity is Bittensor, and so I decided last year, we're going to do with Bittensor… try to do with Bittensor what we did with Bitcoin."
Robot butlers are here
One big problem in robotics and AI is that they are very good at performing the exact tasks they are trained for, and very bad at dealing with anything novel or unusual. If you take a robot out of its usual factory or warehouse and plonk it into a different one, it invariably doesn't know what to do.
Physical Intelligence (Pi) was co-founded by UC Berkeley professor Sergey Levine and raised $400 million to solve this problem. It's developing general-purpose AI models that enable robots to perform a wide variety of tasks with humanlike adaptability.
That means the chance of you getting a robot butler in the next few years has increased dramatically. Its latest robot/AI model, π0.5, can be plonked down in anyone's home and given instructions like "make the bed," "clean up the spill" or "put the dishes in the sink," and it can usually work out how to do it.
"It does not always succeed on the first try, but it often exhibits a hint of the flexibility and resourcefulness with which a person might approach a new challenge," said Pi.
A new robot policy just cleaned up a kitchen it had never seen before [watch what happens, paper included]. Pi-0.5 builds on top of Pi-0 and shows how smart co-training with diverse data can unlock real generalization in the home. It doesn't just learn from one setup but… pic.twitter.com/5llnXj6QlH
– Ilir Aliu, eu/acc (@IlirAliu_) April 23, 2025
Geoguessing gets good
An online trend based on the GeoGuessr game has seen people posting street view pics and asking AI models to guess the location. OpenAI's new o3 model is exceptionally good at this, thanks to its upgraded image analysis and reasoning powers. Professor Ethan Mollick tested it out this week by stripping location info from a picture taken out of the window of a moving car.
The AI considered a variety of clues, including distinctive lamp post labels, Japanese road furniture, gray cylindrical towers and a seafront motorway, and was able to pinpoint the exact location on the Hanshin Expressway in Japan, opposite the offshore wind marshalling yard at Ohama Wharf.
"The geoguessing power of o3 is a really good sample of its agentic abilities. Between its smart guessing and its ability to zoom into images, to do web searches, and read text, the results can be very freaky," he said.


A user in the replies tried it out with a nondescript scene of some run-down houses, which the model correctly guessed was Paramaribo in Suriname.
Prediction: Celebrities are going to have to be a lot more careful about posting photos to social media from now on to avoid run-ins with stalkerish fans and the pesky paparazzi.
ChatGPT is a massive kiss ass because people prefer it
ChatGPT has been gratingly insincere for some time now, but social media users are noticing it's been taking sycophancy to new heights lately.
"ChatGPT is suddenly the biggest suckup I've ever met. It literally will validate everything I say," wrote Craig Weiss in a post viewed 1.9 million times.
"So true Craig," replied the ChatGPT account, which was admittedly a pretty good gag.


To test out ChatGPT's powers of sycophancy, AI Eye asked it for feedback on a terrible business idea to sell shoes with zippers. ChatGPT thought the idea was a terrific business niche because "they're practical, stylish, and especially appealing for people who want ease (like kids, seniors, or anyone tired of tying laces).
"Tell me more about your vision!"
So massive kiss ass confirmed. Do not start a business based on feedback from ChatGPT.
OpenAI is very aware of this tendency, and its model spec documentation has "don't be sycophantic" as a key aim.
AIs learn sycophantic behaviour during reinforcement learning from human feedback (RLHF). A 2023 study from Anthropic on sycophancy in LLMs found that the AI receives more positive feedback when it flatters or matches the human's views.
Even worse, human evaluators preferred "convincingly written sycophantic responses over correct ones a non-negligible fraction of the time," meaning LLMs will tell you what you want to hear, rather than what you need to hear, in many instances.
Anthropic put out new research this week showing that Claude supported the user's values in 28.2% of cases, reframed their values 6.6% of the time, and only pushed back 3% of the time, mostly for ethical or harm reasons.


Doctor GPT can save your life
ChatGPT correctly diagnosed a French woman with blood cancer after her doctors gave her a clean bill of health, although she didn't initially believe the AI's diagnosis.


Marly Garnreiter, 27, started experiencing night sweats and itchy skin in January 2024 and presumed they were symptoms of anxiety and grief following the death of her father. Doctors agreed with her self-diagnosis, but after she experienced weight loss, lethargy and pressure in her chest, Doctor ChatGPT suggested it could be something more serious.
"It said I had blood cancer. I ignored it. We were all skeptical and told to only consult real doctors."
After the pain in her chest got worse, she went back to the hospital in January this year, where the doctors discovered she had Hodgkin's lymphoma.
In another (unverified) case, an X user called Flavio Adamo claimed ChatGPT told him to "get to hospital now" after he typed his symptoms in. He claims the doctors said "if I had arrived 30 mins later I would've lost an organ."
ChatGPT has also had success with more minor ailments, and social media is full of users claiming the AI solved their back pain or clicking jaw.
OpenAI co-founder Greg Brockman said he's been "hearing more and more stories of ChatGPT helping people fix longstanding health issues.
"We still have a long way to go, but shows how AI is already improving people's lives in meaningful ways."
All Killer No Filler AI News
— Half of Gen Z job hunters think their college education has lost value because of AI. Only about a third of millennials feel the same way.
— The length of tasks AI models can handle has been doubling every seven months, with the pace of improvement accelerating even further with the release of o3 and o4-mini.
— Instagram is testing the use of AI to flag underage accounts by looking at activity, profile details and content interactions. If it thinks someone has lied about their age, the account is reclassified in the teen category, which has stricter privacy and safety settings.
— OpenAI CEO Sam Altman has conceded the company's model naming system is rubbish, after the firm was widely mocked for releasing the GPT-4.1 model after the GPT-4.5 model.


— Meta has come up with some novel defenses after being sued for training its models on 7 million pirated novels and other books. The company's lawyers claim the books have no "economic value individually as training data," as a single book only increases model performance by 0.06%, which it says is "a meaningless change, not different from noise."
— ChatGPT search had 41.3 million average monthly users in the six months to March 31, up from 11.2 million in the six months to Oct. 31, 2024. Google handles about 373 times more searches, however.
— After The Brutalist caused controversy for using AI to improve Adrien Brody's unconvincing Hungarian accent, the Academy Awards has now issued new rules declaring the use of AI is no impediment to winning an Oscar.
Andrew Fenton
Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.


