Difference between revisions of "Main Page"

From Simia
imported>Denny
Line 1:

The old inline query is unchanged in both revisions:

<ask default="None yet" format="embedded" limit="2" sort="published" order="desc">[[Category:Blog post]] [[published:=+]]</ask>

The new revision adds the equivalent {{#ask:}} parser-function query:

{{#ask:[[Category:Blog post]] [[published::+]]
|order=desc
|sort=published
|limit=2
|format=embedded
|default=None yet
}}
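For context, {{#ask:}} is Semantic MediaWiki's inline query syntax. The conditions [[Category:Blog post]] [[published::+]] select pages in Category:Blog post that have any value (the "+" wildcard) for the published property; sort=published with order=desc sorts by that property, newest first; limit=2 keeps the two most recent; format=embedded transcludes the matching pages in place; and default is shown when nothing matches. As an illustrative variant (the ?published printout and format=table are assumptions for demonstration, not part of this page), the same selection could instead be rendered as a table of page names and dates:

```wikitext
{{#ask:[[Category:Blog post]] [[published::+]]
|?published
|sort=published
|order=desc
|limit=2
|format=table
|default=None yet
}}
```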
  
 
__NOTOC__

Revision as of 18:31, 24 December 2007


30 years of wikis

Today is the 30th anniversary of the launch of the first wiki by Ward Cunningham. A page that anyone could edit. Right from the browser. It was generally seen as a bad idea. What if people did bad things?

Originally created to help the software development community build a repository of software design patterns, wikis were later put to many other uses (even an encyclopedia!) and became, together with blogs, forums, and early social media, part of the recipe that was considered the Web 2.0.

Thank you, Ward, and congratulations on the first 30 years.

A wiki birthday card is being collected on Wikiindex.

Simia

My thoughts on Alignment research

Alignment research seeks to ensure that hypothetical future superintelligent AIs will be beneficial to humanity—that they are "aligned" with "our goals," that they won’t turn into Skynet or universal paperclip factories.

But these AI systems will be embedded in larger processes and organizations. And the problem is: we haven’t even figured out how to align those larger systems with human values.

Throughout history, companies and institutions have committed atrocious deeds—killing, poisoning, discriminating—sometimes intentionally, sometimes despite the best intentions of the individuals within them. These organizations were composed entirely of humans. There was no lack of human intelligence that could have recognized and tempered their misalignment.

Sometimes, misalignment was prevented. When it was, we might have called the people responsible heroes—or insubordinate. We might have awarded them medals, or they might have lost their lives.

Haven’t we all witnessed situations where a human, using a computer or acting within an organization, seemed unable to do the obvious right thing?

Yesterday, my flight to Philadelphia was delayed by a day. So I called the hotel I had booked to let them know I’d be arriving later.

The drama and the pain the front desk clerk went through!

“If you don’t show up today,” he told me, “your whole reservation will be canceled by the system. And we’re fully booked.”

“That’s why I’m calling. I am coming—just a day later. I’m not asking for a refund.”

“No, look, the system won’t let me just cancel one night. And I can’t create a new reservation. And if you don’t check in today, your booking will be canceled…”

And that was a minor issue. The clerk wanted to help. It is a classic case of Little Britain's "Computer says no" sketch. And yet, more and more decisions are being made algorithmically—decisions far more consequential than whether I’ll have a hotel room for the night. Decisions about mortgages and university admissions. Decisions about medical procedures. Decisions about clemency and prison terms. All handled by systems that are becoming increasingly "intelligent"—and increasingly opaque. Systems in which human oversight is diminishing, for better and for worse.

For millennia, organizations and institutions have exhibited superhuman capabilities—sometimes even superhuman intelligence. They accomplish things no individual human could achieve alone. Though we often tell history as a story of heroes and individuals, humanity’s greatest feats have been the work of institutions and societies. Even the individuals we celebrate typically succeeded because they lived in environments that provided the time, space, and resources to focus on their work.

Yet we have no reliable way of ensuring that these superhuman institutions—corporations, governments, bureaucracies—are aligned with the broader goals of humanity. We know that laissez-faire policies have allowed companies to do terrible things. We know that bureaucracies, over time, become self-serving, prioritizing their own growth over their original purpose. We know that organizations can produce outcomes directly opposed to their stated missions.

And these misalignments happen despite the fact that these organizations are made up of humans—beings with whom we are intimately familiar. If we can’t even align them, what hope do we have of aligning an alien, inhuman intelligence? Or even a superintelligence?

More troubling still: why should we accept a future in which only a handful of trillion-dollar companies—the dominant tech firms of the Western U.S.—control access to such powerful, unalignable systems? What have these corporations done to earn such an extraordinary level of trust in a technology that some fear could be catastrophic?

What am I arguing for? To stop alignment research? No, not at all. But I would love for us to shift our focus to the short- and mid-term effects of these technologies. Instead of debating whether we might have to fight Skynet, we should be considering how to prevent further concentration of wealth by 2030 and how to ensure a fairer distribution of the benefits these technologies bring to humanity. Instead of worrying about Roko’s basilisk, we should examine the impact of LLMs on employment markets—especially given the precarious state of unions and labor regulations in certain countries. Rather than fixating on hypothetical paperclip-maximizing AIs, we should focus on the real and immediate dangers of lethal autonomous weapons in warfare and terrorism.

