The neocortex has been hypothesized to be uniformly composed of general-purpose data-processing modules. What does the currently available evidence suggest about this hypothesis? Alex Zhu explores various pieces of evidence, including deep neural networks and predictive coding theories of brain function. [tweet]
every wiki page is a tag and every tag is just a normal wiki page
Not every wiki page is a tag! Some wiki pages are tags, which I think makes sense. Others are articles optimized to be wiki pages.
Sounds interesting, and like something I might be missing if true. I would be interested in examples.
This is an experiment in short-form content on LW2.0. I'll be using the comment section of this post as a repository of short, sometimes-half-baked posts that either:
I ask people not to create top-level comments here, but feel free to reply to comments like you would a FB post.
Part of the generator was "I've seen a demo of Apple AirPods basically working for this right now" (it's not, like, 100% silent; you have to speak at a whisper, but it seemed fine for a room with some background noise).
It was the July 4 weekend. Grok on Twitter got some sort of upgrade.
Elon Musk: We have improved @Grok significantly.
You should notice a difference when you ask Grok questions.
Indeed we did notice big differences.
It did not go great. Then it got worse.
That does not mean low-quality answers or being a bit politically biased. Nor does it mean one particular absurd quirk like we saw in Regarding South Africa, or before that the narrow instruction not to criticize particular individuals.
Here ‘got worse’ means things that involve the term ‘MechaHitler.’
Doug Borton: I did Nazi this coming.
Perhaps we should have. Three (escalating) times is enemy action.
I had very low expectations for xAI, including on these topics. But not like this.
In the wake of these events, Linda Yaccarino has stepped...
Claire Berlinski, whose usual beat is geopolitics, has produced an excellent overview of Grok's time as a white nationalist edgelord: what happened, where it might have come from, and what it suggests. She's definitely done her homework on the AI safety side.
This is a cross-post from my blog; historically, I've cross-posted about a square root of my posts here. The first two sections are likely to be familiar concepts to LessWrong readers, though I don't think I've seen their application in the third section before.
If you’re poor, debt is very bad. Shakespeare says “neither a borrower nor a lender be”, which is probably good advice when money is tight. Don’t borrow, because if circumstances don’t improve you’ll be unable to honor your commitment. And don’t lend, for the opposite reason: your poor cousin probably won’t “figure things out” this month, so you won’t fix their life, they won’t pay you back, and you’ll resent them.
If you’re rich, though, debt is great....
Yeah, I agree this is more "thing to try on the margin" than "universally correct solution." Part of why I have the whole (controversial) preamble is that I'm trying to gesture at a state of mind that, if you can get it in a group, seems pretty sweet.
“There was an old bastard named Lenin
Who did two or three million men in.
That’s a lot to have done in
But where he did one in
That old bastard Stalin did ten in.”
—Robert Conquest
The current administration’s rollup of USAID has caused an amount of outrage that surprised me, and inspired anger even in thoughtful people known for their equanimity. There have been death counters. There have been congressional recriminations. At the end of last month, the Lancet published Cavalcanti et al., entitled Evaluating the impact of two decades of USAID interventions and projecting the effects of defunding on mortality up to 2030: a retrospective impact evaluation and forecasting analysis.
This paper uses some regressions and modeling to predict, among other things, that almost 1.8M people are expected to...
Thanks for the thoughtful comment. I'll try to address these remarks in order. You state:
Furthermore, you only examine all-cause mortality, whereas the study examines deaths from specific diseases.
They also use overall mortality (Web Table 10), which is what I was trying to reproduce and screenshotted. The significance figures aren't really different from those for the regressions broken down by mortality cause (Web Table 15), but the rate ratios in the disaggregated regressions are clearly smaller than the all-cause ones because people die ...
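To make that dilution concrete, here's a toy back-of-the-envelope sketch (all numbers are invented for illustration and are not taken from the paper or its Web Tables):

```python
# Toy illustration (invented numbers, not from Cavalcanti et al.): why the
# all-cause rate ratio sits closer to 1 than the cause-specific ones.
targeted = 200    # baseline deaths per 100k from USAID-targeted causes
other = 800       # baseline deaths per 100k from all other causes

rr_cause_specific = 0.5  # suppose funding halves deaths from targeted causes

# All-cause deaths under funding, divided by baseline all-cause deaths.
rr_all_cause = (targeted * rr_cause_specific + other) / (targeted + other)
print(rr_cause_specific)  # 0.5
print(rr_all_cause)       # 0.9 -- diluted by deaths the program can't touch
```

The cause-specific ratio shows the undiluted effect, while the all-cause ratio is pulled toward 1 by the large share of deaths the interventions never touch, which is why the disaggregated rate ratios come out smaller.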
Author's note: These days, my thoughts go onto my substack by default, instead of onto LessWrong. Everything I write becomes free after a week or so, but it’s only paid subscriptions that make it possible for me to write. If you find a coffee’s worth of value in this or any of my other work, please consider signing up to support me; every bill I can pay with writing is a bill I don’t have to pay by doing other stuff instead. I also accept and greatly appreciate one-time donations of any size.
You’ve probably seen that scene where someone reaches out to give a comforting hug to the poor sad abused traumatized orphan and/or battered wife character, and the poor sad abused traumatized orphan and/or battered wife...
from Alexis’s perspective, Bryce is hitting “defect” on a prisoner’s dilemma
Was surprised at this line because that scenario seemed to me clearly a Stag Hunt. On reflection, of course this varies between people.
Edit: it seemed that way from Alexis's perspective, I mean.
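For anyone who wants the distinction concrete: in a prisoner's dilemma, defecting is the best reply no matter what the other player does, while in a stag hunt the cooperative move is only best if you expect cooperation. A minimal sketch with illustrative payoffs (the numbers are mine, not from the original post):

```python
# Toy payoff matrices (illustrative numbers only).
# Keys: (my move, their move); values: my payoff.
prisoners_dilemma = {
    ("cooperate", "cooperate"): 3, ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,    ("defect", "defect"): 1,
}
stag_hunt = {
    ("stag", "stag"): 4, ("stag", "hare"): 0,
    ("hare", "stag"): 2, ("hare", "hare"): 2,
}

def best_response(game, their_move):
    """My payoff-maximizing move, given what the other player does."""
    moves = {m for m, _ in game}
    return max(moves, key=lambda m: game[(m, their_move)])

# In a prisoner's dilemma, defecting dominates...
assert best_response(prisoners_dilemma, "cooperate") == "defect"
assert best_response(prisoners_dilemma, "defect") == "defect"
# ...but in a stag hunt, cooperation is self-reinforcing: hunt stag iff they do.
assert best_response(stag_hunt, "stag") == "stag"
assert best_response(stag_hunt, "hare") == "hare"
```

So whether Bryce's move reads as "defect" depends on which game Alexis thinks they're playing: in the stag hunt framing, it's less a betrayal than a failure to coordinate.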
Written in an attempt to fulfill @Raemon's request.
AI is fascinating stuff, and modern chatbots are nothing short of miraculous. If you've been exposed to them and have a curious mind, it's likely you've tried all sorts of things with them. Writing fiction, soliciting Pokemon opinions, getting life advice, counting up the r's in "strawberry". You may have also tried talking to AIs about themselves. And then, maybe, it got weird.
I'll get into the details later, but if you've experienced the following, this post is probably for you:
Fascinating post. I believe what ultimately matters isn’t whether ChatGPT is conscious per se, but when and why people begin to attribute mental states and even consciousness to it. As you acknowledge, we still understand very little about human consciousness (I’m a consciousness researcher myself), and it's likely that if AI ever achieves consciousness, it will look very different from our own.
Perhaps what we should be focusing on is how repeated interactions with AI shape people's perceptions over time. As these systems become more embedded in our lives,...