Project for Existential Risk

Post questions or suggestions here.
Diebert van Rhijn
Posts: 6050
Joined: Fri Jun 03, 2005 4:43 pm

Project for Existential Risk

Post by Diebert van Rhijn » Mon Nov 26, 2012 9:51 pm

The Cambridge Project for Existential Risk
Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake.

The Cambridge Project for Existential Risk — a joint initiative between a philosopher, a scientist, and a software entrepreneur — begins with the conviction that these issues require a great deal more scientific investigation than they presently receive. Our aim is to establish within the University of Cambridge a multidisciplinary research centre dedicated to the study and mitigation of risks of this kind. We are convinced that there is nowhere on the planet better suited to house such a centre. Our goal is to steer a small fraction of Cambridge's great intellectual resources, and of the reputation built on its past and present scientific pre-eminence, to the task of ensuring that our own species has a long-term future.
Some questions could arise here: what exactly is "our species", and why would it need to be ensured a "future" in the exact form in which it appears now? Nobody ensured the existence of our predecessors, lucky for us. One could also question whether existential risks should be evaluated by scientists and academics at all. Haven't many of them, after all, more than anyone else, carved out an existence in their glass towers high up there? Isn't this particular group just engaging, like everyone else, in the modern religion of "envisioning our possible endings but never ending anything"?

Pye
Posts: 1065
Joined: Tue Jan 17, 2006 1:45 pm

Re: Project for Existential Risk

Post by Pye » Tue Nov 27, 2012 12:44 am

Setting aside the issue of the academy and its efforts toward relevance, I’d be interested to know how the authors of this proposal make links between technology and extinction. Every semester at one particular point in one particular course, some student or another will reply to the topic at hand by telling me about a sociology professor on campus who is outrightly teaching the idea that humans have ceased to evolve. Every time it comes up, I query the students about how they understood it, and I usually get back something about physical evolution (I’m not sure what is meant by the distinction ‘physical’) being slowed, stopped, by excessive medical advances, prostheses, and all other such conveniences that prevent the body from experiencing illness, injury and death in the manner necessary for genetic response. I don’t get far enough to find out if this sociology professor means to leave out “mental” evolution, or wtf he would mean by it anyway.

I don’t mean to put you in the position of trying to defend this idea, Diebert, but where do you think the aforementioned study is heading when it speaks of “extinction”? Is it like, whatever doesn’t make me stronger will kill me?

Bobo
Posts: 517
Joined: Tue Nov 16, 2010 1:35 pm

Re: Project for Existential Risk

Post by Bobo » Tue Nov 27, 2012 11:45 am

Diebert van Rhijn wrote:Some questions could arise here: what exactly is "our species", and why would it need to be ensured a "future" in the exact form in which it appears now? Nobody ensured the existence of our predecessors, lucky for us. One could also question whether existential risks should be evaluated by scientists and academics at all. Haven't many of them, after all, more than anyone else, carved out an existence in their glass towers high up there? Isn't this particular group just engaging, like everyone else, in the modern religion of "envisioning our possible endings but never ending anything"?
This goes back to the end of WW2, when economic and political powers, as well as technological and territorial ones, managed the "existential risks": nuclear technology being a prime example. With the fall of the USSR, and more recently of neoliberalism, and with the rise of technology, the management of the "existential risks" of technology and environment becomes a task for "humanity", for lack of supporting powers.

Kunga
Posts: 2333
Joined: Wed Dec 06, 2006 4:04 am

Re: Project for Existential Risk

Post by Kunga » Tue Nov 27, 2012 12:25 pm

"Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole......"


Duh.....why isn't it obvious that the way to reverse this technological problem is to eliminate the technology that is causing it in the first place? We'd have to go back to living without technology! But nooooooooooooooooo, that won't happen because nobody wants to give it up!

The only humans that will survive are those that can survive without it. And they are the civilizations that civilization considers uncivilized! Stone-Age civilizations have survived the longest of all civilizations on Earth.
Why? Simply because they are not the destroyers of Earth, like modern man.

"The meek shall inherit the Earth. "


http://www.australiangeographic.com.au/ ... -Earth.htm

Diebert van Rhijn
Posts: 6050
Joined: Fri Jun 03, 2005 4:43 pm

Re: Project for Existential Risk

Post by Diebert van Rhijn » Tue Nov 27, 2012 9:51 pm

Pye wrote:I don’t mean to put you in the position of trying to defend this idea, Diebert, but where do you think the aforementioned study is heading when it speaks of “extinction”? Is it like, whatever doesn’t make me stronger will kill me?
The idea of (rather sudden) extinction events is itself rather old, ranging from the religious concept of the apocalypse to worries about alien invasion or meteor strikes (the heavens falling on us). There have also been organized responses, like the Near-Earth Object programs, which actually monitor but, to my knowledge, will not be able to prevent much. Not to mention the fear of a nuclear wasteland, which has hung over us like a sword of Damocles since the 1960s. And speaking of which, where did it go: did we stop fearing it, or internalize it even further? The rockets are still there, and the buttons. Could it be that it has sunk far into our own depths and is now raising its ugly head again as "existential worries" about "new technology" in the most general sense?

But if I were to defend the study, I'd say that it's about preventable dangers: prevention through description, since, as the article mentioned, the problem lies already in the difficulty of assessing the danger, of quantifying it, and as such perhaps of ritualistically soothing the fear. This logic is also seen with the global warming dilemma: even if we don't understand the precise mechanics and the models are always uncertain, how much risk are we willing to take if we could prevent the worst outcome the model indicates? But even here there is a price tag, and nobody wants to pay the full price at this stage. What kind of calculation is happening here? Are we sitting around a poker table having to decide if we can still afford to fold? Would the future respond to a bluff?

Dennis Mahar
Posts: 4082
Joined: Thu Jul 29, 2010 9:03 pm

Re: Project for Existential Risk

Post by Dennis Mahar » Tue Nov 27, 2012 11:38 pm

Diebert van Rhijn wrote:Are we sitting around a poker table having to decide if we can still afford to fold? Would the future respond to a bluff?
It's always the same conversation,
or in your analogy, 'game'.

It starts from 'conditions are dangerous' or 'it's a dangerous world',
the always/already projection in hand as if it were a fixedness.

assessments are drawn.
a set of options drawn.
a winning formula engaged until it breaks down and the cycle continues.
in the poker analogy the option is to get the winning hand.

Is there another conversation available?
What if the conversation that could be generated had a spiritual twang to it?
What would that look like?
What would be in it?

It can't be a conversation about God because that automatically generates conflict.
It has to be a conversation about what it means to be human.
We can't know God but we have a knowability of the human circus.
We have to stand up and look each other in the eye.

brad walker
Posts: 300
Joined: Fri Sep 21, 2007 8:49 am
Location: be an eye

Re: Project for Existential Risk

Post by brad walker » Wed Nov 28, 2012 5:42 pm

Who here is in a position to do anything significant regarding existential risk?

Pye
Posts: 1065
Joined: Tue Jan 17, 2006 1:45 pm

Re: Project for Existential Risk

Post by Pye » Thu Nov 29, 2012 12:52 am

Lately I’ve been wondering if these things coming out of us (bio-tech, nanotech, AI, “artificial” life) will become the next form of conscious life as different from us as we are different from our proto-hominid ancestors. In other words, I’ve been wondering if this “extinction” worry is really one of evolution, and that humans will host forward these other, perhaps more persistent, resistant forms of “life” (as long as they can self-replicate) that can withstand better everything that’s killing us . . . .

Kunga, you might like a fellow named David Watson and his book/essays Against the Mega-machine. He is a flaming anarchist who believes we’ve become extraordinarily “stupid” in our inability to foster any other solution to the problems technology raises other than more technology. We’ve lost the ability to envision, even think in any other direction but this.

Well, some of us have lost it . . .

The memes of capital culture are at play here, too. We seem to think anything less than technology-forward progress would be “regression,” and this is how Watson thinks we’ve become extra-stupid, as though the only two choices are the mega-machine in which we all reside, or abject primitivism . . . .

Diebert van Rhijn
Posts: 6050
Joined: Fri Jun 03, 2005 4:43 pm

Re: Project for Existential Risk

Post by Diebert van Rhijn » Thu Nov 29, 2012 1:44 am

brad walker wrote:Who here is in a position to do anything significant regarding existential risk?
Killing oneself significantly reduces the risk, of course. But then there's the risk of some kind of Afterlife one Did Not Expect.

Diebert van Rhijn
Posts: 6050
Joined: Fri Jun 03, 2005 4:43 pm

Re: Project for Existential Risk

Post by Diebert van Rhijn » Thu Nov 29, 2012 1:50 am

Pye wrote:Lately I’ve been wondering if these things coming out of us (bio-tech, nanotech, AI, “artificial” life) will become the next form of conscious life as different from us as we are different from our proto-hominid ancestors. In other words, I’ve been wondering if this “extinction” worry is really one of evolution, and that humans will host forward these other, perhaps more persistent, resistant forms of “life” (as long as they can self-replicate) that can withstand better everything that’s killing us . . . .
There's a possibility it already happened, but the scale might be too macroscopic to be noticed. The global economy with all its wiring and interconnection could certainly be counted as a "beast" with its own priorities, which clearly aren't always aligned with the happiness of humans. Another angle was provided by the Google founders when they talked about making their infrastructure "AI-complete", or suggested that the information now being put online is not for us but for a (massive) AI to read and analyze. Intelligence needs a lot of input, after all, like a brain needs neurons and signals.

Dennis Mahar
Posts: 4082
Joined: Thu Jul 29, 2010 9:03 pm

Re: Project for Existential Risk

Post by Dennis Mahar » Thu Nov 29, 2012 1:54 am

'ScienceWorld'

access to money.
gravy train.

Consistency
Posts: 34
Joined: Wed Nov 14, 2012 12:43 am

Re: Project for Existential Risk

Post by Consistency » Thu Nov 29, 2012 3:19 am

Why are we using technology in the first place? Maybe to fulfill a psychological need?
professional of energy.

brad walker
Posts: 300
Joined: Fri Sep 21, 2007 8:49 am
Location: be an eye

Re: Project for Existential Risk

Post by brad walker » Thu Nov 29, 2012 5:28 am

Diebert van Rhijn wrote:Killing oneself significantly reduces the risk, of course. But then there's the risk of some kind of Afterlife one Did Not Expect.
Indeed, suicide is a temporary solution to a permanent problem.
