Thursday, June 23, 2011
A new camera sensor design from Lytro captures light in such a way that the focus can be changed in post. Check out the demonstration images at its homepage, and the CEO's dissertation on how it works:
My proposed solution to the focus problem exploits the abundance of digital image sensor resolution to sample each individual ray of light that contributes to the final image. ...

To record the light field inside the camera, digital light field photography uses a microlens array in front of the photosensor. Each microlens covers a small array of photosensor pixels. The microlens separates the light that strikes it into a tiny image on this array, forming a miniature picture of the incident lighting. This samples the light field inside the camera in a single photographic exposure. ...

To process final photographs from the recorded light field, digital light field photography uses ray-tracing techniques. The idea is to imagine a camera configured as desired, and trace the recorded light rays through its optics to its imaging plane. Summing the light rays in this imaginary image produces the desired photograph. This ray-tracing framework provides a general mechanism for handling the undesired non-convergence of rays that is central to the focus problem. What is required is imagining a camera in which the rays converge as desired in order to drive the final image computation.
This sounds like a plenoptic setup, similar to one demoed by Adobe here. [Thanks, Jim!]
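The "sum the rays in an imaginary camera" idea can be approximated digitally as shift-and-sum refocusing: treat the light field as a grid of sub-aperture views, shift each view in proportion to its offset from the optical axis, and average. Here's a toy NumPy sketch; the array layout and the `alpha` depth parameter are illustrative assumptions, not Lytro's actual pipeline:

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetically refocus a 4-D light field by shift-and-sum.

    light_field: array of shape (U, V, H, W) -- one sub-aperture image
        per (u, v) lens position (a stand-in for the microlens data).
    alpha: relative depth of the virtual focal plane; 1.0 keeps the
        original plane of focus.
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the
            # optical axis, then accumulate into the output image.
            du = int(round((u - U // 2) * (1.0 - 1.0 / alpha)))
            dv = int(round((v - V // 2) * (1.0 - 1.0 / alpha)))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```

With `alpha = 1.0` every shift is zero and the result is just the average of all views (maximum synthetic aperture at the original focal plane); other values of `alpha` slide the focal plane forward or back.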
From Spitalfields Life, a collection of Horace Warner's 'Spitalfields Nippers' photos, of the barefoot urchins that haunted the neighbourhood around London's Spitalfields Market in 1912. I'm typing these words within spitting distance (ahem) of Spitalfields, and I'm pretty sure I recognise some of the buildings. The kids' expressions are a mix of plucky cheek, premature cynicism and desperation.
Little is known of Horace Warner and nothing is known of his relationship to the nippers. Only thirty of these pictures survive, out of two hundred and forty that he took, tantalising the viewer today as rare visions of the lost tribe of Spitalfields Nippers. They may look like paupers, and the original usage of them to accompany the annual reports of the charitable Bedford Institute, Quaker St, Spitalfields, may have been as illustrations of poverty - but that is not the sum total of these beguiling photographs, because they exist as spirited images of something much more subtle and compelling, the elusive drama of childhood itself.
The Orphan Works Project is being led by the Copyright Office of the University of Michigan Library to identify orphan works. Orphan works are books that are subject to copyright but whose copyright holders cannot be identified or contacted. Our immediate focus is on digital books held by HathiTrust, a partnership of major research institutions and libraries working to ensure that the cultural record is preserved and accessible long into the future.
This effort is funded by the HathiTrust and is part of U-M Library's ongoing efforts to understand the true copyright status of works in its collection. As part of this effort, the Library will develop policies, processes, and procedures that can be used by other HathiTrust partners to replicate a task that will ultimately require the hand-checking of millions of volumes.
Orphan Works Project
Last year, Waxy released Kind of Bloop, a chiptunes tribute to Miles Davis's Kind of Blue. He meticulously cleared all the samples on the album, and released it for $5 (backers of his Kickstarter project got it for free -- Waxy helped found Kickstarter). One thing Waxy didn't clear was the pixellated re-creation of the iconic cover photo he commissioned. He believed, and still believes, that it is fair use -- a transformative use with minimal taking that doesn't harm the market for the original, produced to comment on the original. Jay Maisel, the photographer who shot the original, disagreed, and sued Waxy for $150,000 per infringement, plus $25,000 in DMCA damages. Waxy ended up settling for $32,500, even though he believes he's in the right -- he couldn't afford to defend himself in court. He's written an excellent post on copyright, fair use, and the way that the system fails to protect the people who are supposed to get an exception to copyright:
In practice, none of this matters. If you're borrowing inspiration from any copyrighted material, even if it seems clear to you that your use is transformational, you're in danger. If your use is commercial and/or potentially objectionable, seek permission (though there's no guarantee it'll be granted) or be prepared to defend yourself in court.
Anyone can file a lawsuit and the costs of defending yourself against a claim are high, regardless of how strong your case is. Combined with vague standards, the result is a chilling effect for every independent artist hoping to build upon or reference copyrighted works.
It breaks my heart that a project I did for fun, on the side, and out of pure love and dedication to the source material ended up costing me so much -- emotionally and financially. For me, the chilling effect is palpably real. I've felt irrationally skittish about publishing almost anything since this happened. But the right to discuss the case publicly was one concession I demanded, and I felt obligated to use it. I wish more people did the same -- maybe we wouldn't all feel so alone.
Kind of Screwed
Pervious concrete is, basically, just concrete that allows water to flow through it. This has some benefits and detriments for urban environments, as explained on NPR's Science Friday. Frankly, though, it's kind of pleasant to just sit back and watch this patch of pervious concrete absorb 1500 gallons in five minutes.
I periodically get email from folks who, having read 'Accelerando', assume I am some kind of fire-breathing extropian zealot who believes in the imminence of the singularity, the uploading of the libertarians, and the rapture of the nerds. I find this mildly distressing, and so I think it's time to set the record straight and say what I really think.
Short version: Santa Claus doesn't exist.
I'm going to take it as read that you've read Vernor Vinge's essay on the coming technological singularity (1993), are familiar with Hans Moravec's concept of mind uploading, and know about Nick Bostrom's Simulation argument. If not, stop right now and read them before you continue with this piece. Otherwise you're missing out on the fertilizer in which the whole field of singularitarian SF, not to mention posthuman thought, is rooted. It's probably a good idea to also be familiar with Extropianism and to have read the posthumanism FAQ, because if you haven't you'll have missed out on the salient social point that posthumanism has a posse.
(In passing, let me add that I am not an extropian, although I've hung out on and participated in their online discussions since the early 1990s. I'm definitely not a libertarian: economic libertarianism is based on the same reductionist view of human beings as rational economic actors as 19th century classical economics — a drastic over-simplification of human behaviour. Like Communism, Libertarianism is a superficially comprehensive theory of human behaviour that is based on flawed axioms and, if acted upon, would result in either failure or a hellishly unpleasant state of post-industrial feudalism.)
But anyway ...
I can't prove that there isn't going to be a hard take-off singularity in which a human-equivalent AI rapidly bootstraps itself to de-facto god-hood. Nor can I prove that mind uploading won't work, or that we are or aren't living in a simulation. Any of these things would require me to prove the impossibility of a highly complex activity which nobody has really attempted so far.
However, I can make some guesses about their likelihood, and the prospects aren't good.
First: super-intelligent AI is unlikely because, if you pursue Vernor's program, you get there incrementally by way of human-equivalent AI, and human-equivalent AI is unlikely. The reason it's unlikely is that human intelligence is an emergent phenomenon of human physiology, and it only survived the filtering effect of evolution by enhancing human survival fitness in some way. Enhancements to primate evolutionary fitness are not much use to a machine, or to people who want to extract useful payback (in the shape of work) from a machine they spent lots of time and effort developing. We may want machines that can recognize and respond to our motivations and needs, but we're likely to leave out the annoying bits, like needing to sleep for roughly 30% of the time, being lazy or emotionally unstable, and having motivations of their own.
(This is all aside from the gigantic can of worms that is the ethical status of artificial intelligence; if we ascribe the value inherent in human existence to conscious intelligence, then before creating a conscious artificial intelligence we have to ask if we're creating an entity deserving of rights. Is it murder to shut down a software process that is in some sense "conscious"? Is it genocide to use genetic algorithms to evolve software agents towards consciousness? These are huge show-stoppers — it's possible that just as destructive research on human embryos is tightly regulated and restricted, we may find it socially desirable to restrict destructive research on borderline autonomous intelligences ... lest we inadvertently open the door to inhumane uses of human beings as well.)
We clearly want machines that perform human-like tasks. We want computers that recognize our language and motivations and can take hints, rather than requiring instructions enumerated in mind-numbingly tedious detail. But whether we want them to be conscious and volitional is another question entirely. I don't want my self-driving car to argue with me about where we want to go today. I don't want my robot housekeeper to spend all its time in front of the TV watching contact sports or music videos. And I certainly don't want to be sued for maintenance by an abandoned software development project.
Karl Schroeder suggested one interesting solution to the AI/consciousness ethical bind, which I used in my novel Rule 34. Consciousness seems to be a mechanism for recursively modeling internal states within a body. In most humans, it reflexively applies to the human being's own person: but some people who have suffered neurological damage (due to cancer or traumatic injury) project their sense of identity onto an external object. Or they are convinced that they are dead, even though they know their body is physically alive and moving around.
If the subject of consciousness is not intrinsically pinned to the conscious platform, but can be arbitrarily re-targeted, then we may want AIs that focus reflexively on the needs of the humans they are assigned to — in other words, their sense of self is focussed on us, rather than internally. They perceive our needs as being their needs, with no internal sense of self to compete with our requirements. While such an AI might accidentally jeopardize its human's well-being, it's no more likely to deliberately turn on its external 'self' than you or I are to shoot ourselves in the head. And it's no more likely to try to bootstrap itself to a higher level of intelligence that has different motivational parameters than your right hand is likely to grow a motorcycle and go zooming off to explore the world around it without you.
Uploading ... is not obviously impossible unless you are a crude mind/body dualist. However, if it becomes plausible in the near future we can expect extensive theological arguments over it. If you thought the abortion debate was heated, wait until you have people trying to become immortal via the wire. Uploading implicitly refutes the doctrine of the existence of an immortal soul, and therefore presents a raw rebuttal to those religious doctrines that believe in a life after death. People who believe in an afterlife will go to the mattresses to maintain a belief system that tells them their dead loved ones are in heaven rather than rotting in the ground.
But even if mind uploading is possible and eventually happens, as Hans Moravec remarks, "Exploration and colonization of the universe awaits, but earth-adapted biological humans are ill-equipped to respond to the challenge. ... Imagine most of the inhabited universe has been converted to a computer network — a cyberspace — where such programs live, side by side with downloaded human minds and accompanying simulated human bodies. A human would likely fare poorly in such a cyberspace. Unlike the streamlined artificial intelligences that zip about, making discoveries and deals, reconfiguring themselves to efficiently handle the data that constitutes their interactions, a human mind would lumber about in a massively inappropriate body simulation, analogous to someone in a deep diving suit plodding along among a troupe of acrobatic dolphins. Every interaction with the data world would first have to be analogized as some recognizable quasi-physical entity ... Maintaining such fictions increases the cost of doing business, as does operating the mind machinery that reduces the physical simulations into mental abstractions in the downloaded human mind. Though a few humans may find a niche exploiting their baroque construction to produce human-flavored art, more may feel a great economic incentive to streamline their interface to the cyberspace." (Pigs in Cyberspace, 1993.)
Our form of conscious intelligence emerged from our evolutionary heritage, which in turn was shaped by our biological environment. We are not evolved for existence as disembodied intelligences, as 'brains in a vat', and we ignore E. O. Wilson's Biophilia Hypothesis at our peril; I strongly suspect that the hardest part of mind uploading won't be the mind part, but the body and its interactions with its surroundings.
Moving on to the Simulation Argument: I can't disprove that, either. And it has a deeper-than-superficial appeal, insofar as it offers a deity-free afterlife, as long as the ethical issues involved in creating ancestor simulations are ignored. (Is it an act of genocide to create a software simulation of an entire world and its inhabitants, if the conscious inhabitants are party to an act of genocide?) Leaving aside the sneaking suspicion that anyone capable of creating an ancestor simulation wouldn't be focussing their attention on any ancestors as primitive as us, it would make a good free-form framework for a postmodern high-tech religion. Unfortunately it seems to be unfalsifiable, at least by the inmates (us).
Anyway, in summary ...
This is my take on the singularity: we're not going to see a hard take-off, or a slow take-off, or any kind of AI-mediated exponential outburst. What we're going to see is increasingly solicitous machines defining our environment — machines that sense and respond to our needs "intelligently". But it will be the intelligence of the serving hand rather than the commanding brain, and we're only at risk of disaster if we harbour self-destructive impulses.
We may eventually see mind uploading, but there'll be a holy war to end holy wars before it becomes widespread: it will literally overturn religions. That would be a singular event, but beyond giving us an opportunity to run Nozick's experience machine thought experiment for real, I'm not sure we'd be able to make effective use of it — our hard-wired biophilia will keep dragging us back to the real world, or to simulations indistinguishable from it.
Finally, the simulation hypothesis builds on this and suggests that if we are already living in a cyberspatial history simulation (and not a philosopher's hedonic thought experiment) we might not be able to apprehend the underlying 'true' reality. In fact, the gap between here and there might be non-existent. Either way, we can't actually prove anything about it, unless the designers of the ancestor simulation have been kind enough to gift us with an afterlife as well.
Any way you cut these three ideas, they don't provide much in the way of reference points for building a good life, especially if they turn out to be untrue or impossible (the null hypothesis). Therefore I conclude that, while not ruling them out, it's unwise to live on the assumption that they're coming down the pipeline within my lifetime.
I'm done with computational theology: I think I need a drink!
Update: Today appears to be Steam Engine day: Robin Hanson on why he thinks a singularity is unlikely. Go read.
Tuesday, June 21, 2011
Monday, June 20, 2011
2/3 of the subjects completely failed to notice a fight in the park because they were too busy paying attention to how many times the jogger in front of them touched his hat. (Via Eric Sorenson)
Eyez is a massively oversubscribed Kickstarter project to develop and ship a 200g pair of glasses with a hidden 720p video-camera, mic, and 8GB of memory. The glasses are styled to resemble Wayfarers, and can record locally or stream via Bluetooth to a mobile phone. Kickstarter supporters can pre-order for $150; they'll retail for $200 when (and if) they ship.
Our engineering team at ZionEyez is currently developing Eyez, the latest innovation in personal video recording technology. Eyez embeds a 720p HD video camera within a pair of eyeglasses designed to record live video data. The recorded data can be stored on the 8GB of flash memory within the Eyez glasses, transferred via Bluetooth or Micro USB to a computer, or wirelessly transferred to most iPhone or Android devices. After a one-time download of the "Eyez" smartphone and tablet app, users can wirelessly broadcast the video in real time to their preferred social networking website.
Eyez by ZionEyez HD Video Recording Glasses for Facebook
(via O'Reilly Radar)