Sunday, October 21, 2012

The dilemma of population ethics

What's population ethics?

A while back I read through this excellent introduction to population ethics:
http://plato.stanford.edu/entries/repugnant-conclusion/

Population ethics is about how to make ethical decisions that affect a whole population.

For example, if we introduce contraception to a starving country and this raises millions out of poverty by preventing the conception of millions of children, is that good or bad?

As the above link points out, we don't have a good answer to that question.  I've been trying to answer it for several months now, and haven't gotten very far, even after cornering half a dozen philosophy grads.


One standard approach to population ethics -- total utilitarianism, summing the utility (or happiness) of everyone in the population -- suggests that the population would be better off without contraception: even though people are still miserable, there are more of them, so their combined small utilities outweigh the greater happiness of the much smaller population they'd have with contraception.  In fact, by this rule, a billion absolutely miserable people would be considered better off than a million happy, healthy people.

The opposite standard approach -- averaging happiness instead of summing it -- favors contraception: dividing total happiness by the number of people, the smaller nation full of happy people comes out ahead.  But this too fails, because it would hold that a single, utterly ecstatic person is better than a million merely happy people.
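
To make that concrete: a billion people at utility 0.001 apiece total 1,000,000, beating a million people at 0.9 apiece (total 900,000) under the first rule; under the second rule, one person at utility 1.0 (average 1.0) beats a million people at 0.9 (average 0.9).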

Population as a single organism?


But I think I've finally come across a useful approach: model the population as a single organism.  This opens the door to a lot of metaphorical reasoning using the tools we already have for ethical questions about a single person.  (Of course, as always, we have to check whether each conclusion still makes sense for a population, but at least it gives us a place to start.)

We don't consider a 500-pound person healthier than a person of ideal weight, nor do we amputate a leg just because you have a weak knee (thus raising the average health of your remaining limbs).

So far, that's pretty promising -- it looks like we can avoid the huge+miserable and the tiny+ecstatic problems the standard two approaches have.  So how should we answer the contraception question?

Well, how do we normally evaluate the health and value of an organism?

  • Life expectancy
  • Degree of impairment or illness
  • Interaction with its environment
These seem relevant to our contraception question:
  • Life expectancy: Does the smaller, wealthier country with contraception have a better chance of surviving in the long term than the larger poor country?  Probably, if they don't go overboard and stop having kids entirely.
  • Impairment/illness: This seems like the best approximation to poverty and hunger.  It's better to be small and healthy than huge and sick.
  • Environment: It's easy to miss this with the standard approaches, which tend to focus on the happiness of the individual.  But we immediately see the problem of a big elephant in a small cage, or a lion in with the lambs.
My favorite aspect of this approach is that it favors moderation: populations that have good prospects of long, healthy life.  Too big creates environmental and health problems, while too small makes you more likely to get eaten or stepped on.

Drawbacks?

I like this idea, but we should probably also consider the drawbacks.  The biggest ones I can think of are that the approach could tend toward fascism, and that it could be used to excuse genocide.  

Fascism is a risk because we don't normally worry about the rights of individual cells in our bodies; the whole is what's important, not the parts.  This is a real risk, and we've seen it go badly for the countries that have tried it.  In a more general sense, though, most countries espouse the notion of eminent domain and other principles that allow the state to override the desires of an individual.  And we kill or sequester individuals who pose a threat to the well-being of others.  So perhaps fascism is the same risk to a population that blind hedonism is to an individual -- in some sense the organism is seeking its own good, but damage to the pieces that make up the whole is deadly in the long run.

The risk of justifying genocide is related: the idea that if your right hand offends you, cut it off for the good of the whole.  Here again I think the problem lies in theory vs. practice.  Proponents of genocide in history always claim that the targeted population is hopelessly corrupt and dangerous, which has always been untrue.  But the more abstract principle, that certain people or groups are too deadly to be left alone (whether these are invading armies or serial killers running free), seems generally accepted.

Conclusion

I never thought I'd be the kind of guy to say "What we really need here is a philosopher!"  But it's really true; real, practical problems like how to handle foreign aid and what kinds of charity to support are rooted in population ethics.  I'm not sure how well my approach will hold up in the long run, but I like that it's something that doesn't immediately lead to absurdities.

What do you think?

Friday, October 12, 2012

AT&T (and SBC) sucks

Where I live, AT&T is the only company I can get basic phone service from, and I have to have basic phone service before I can get DSL from a decent ISP like Sonic (who won't go out of their way to throttle your connection, hijack your DNS, or share your surfing habits with the government).

AT&T, of course, would rather have you spend hundreds of dollars a month on crappy internet, phone, TV and wireless service.

Pitfalls:

  • AT&T's site, once you get to the part where you're signing up for basic phone service, is incredibly slow.  Literally minutes for each page to load.  Find something else to do while you wait.  Clicking "continue" multiple times will screw things up, so click it once and wait.  Sometimes you'll get a "loading" cursor, sometimes you won't.
  • After I made it all the way to the end, I learned that the second-to-last confirmation page doesn't work with Chrome.  So I got to restart with Firefox.  Yay.
  • Of course they're going to try to upsell you every step of the way, including charging you if you DON'T want to appear in the phone book.  (But at least you can list what name you show up under).
  • Metered rate service is the cheapest option.  Under this plan you actually have to pay for local calls.  Or something.  I don't care; I'm not even going to hook a phone up to it.  And of course they're not going to make that easy to find. 
Step by step:

  • Once you've put in your address, at the next page, mouseover "Home Phone" and click "Home Phone Plans".
  • Then click "Order now" under the "Start New Service with AT&T" on the right sidebar.
You can probably figure it out from there.

Thursday, October 04, 2012

Mountain View City Council 2012 Election Forum

There are 4 slots and 6 candidates running for City Council in Mountain View, CA.  I went to a Q&A session with the candidates on 4 Oct 2012.

Overall impressions: (this is the only part of the post where I'm injecting my own opinions)

  • McAllister: "the politician".  Experienced political speaker; some buzzwords and self-promotion.
  • Inks: "the moderate".  Moderately libertarian approach; government experience.
  • Neal: "the libertarian".  Strongly libertarian; limited government.
  • Capriles: "the green candidate".  Said the most about environmental concerns.
  • Clark: "the economist".  Seems smart.
  • Kasperzak: "the incumbent".  Current mayor.  Supports programs like the plastic bag ban and narrowing El Camino Real.


Mike Kasperzak: incumbent Mayor. http://www.kasperzak.org/
Platform:

  • Affordable housing
  • Transit & parking
  • Fiscal sustainability

Jim Neal: sysadmin.  https://www.facebook.com/nealformountainviewcitycouncil
Platform:

  • Limited government
  • Responsiveness to residents

Chris Clark: incumbent Planning Commission member.  http://www.electchrisclark.org/
Platform:

  • Maintain fiscal prudence
  • Transportation infrastructure

John Inks: incumbent vice-Mayor. http://www.electinks.com/
Platform:

  • Balanced budgets
  • Avoiding increased fees & taxes

John McAllister: incumbent Planning Commission member. http://johnmcalister.org/
Platform:

  • Financially strong city government
  • Strong negotiation
  • Effective transportation network


Margaret Capriles: data quality consultant at HP.  http://margaretcapriles.com/
Platform:

  • Integrated solutions across neighborhoods
  • Transportation infrastructure
  • Fiscal responsibility

Questions:

Dealing with traffic on N. Bayshore
  • Kasperzak: paid parking, discouraging people from driving, personal rapid transit
  • Neal: Encourage housing in N. Bayshore, increase parking
  • Clark: Improved stoplights, bike & pedestrian overpasses, personal rapid transit
  • Capriles: There's a study in progress.  Can we get to 0 cars in MV?
  • McAllister: Ask local employees. Increased access points into N. Bayshore.  MV / Google / VTA collaboration.
  • Inks: There's a study in progress.  Increased access points into N. Bayshore
High speed rail.  For or against?
  • Inks: Against.  Too politicized, focused on "bookend" cities.
  • McAllister: Support in theory.  Lots of issues in practice
  • Capriles: Conceptually good.  Devil is in the details.  Current state "has me questioning"
  • Clark: Initially interested.  It has become a mess.  Let's take advantage of the electrification funds for VTA.
  • Neal: No-brainer: against.  We can't afford it.  A report just came out showing it's ridiculously expensive.
  • Kasperzak: Voted for it, California needs it.  Shot themselves in the foot with it.  Issue has already been decided; how will we deal with it in Mountain View?  Tries to be optimistic: people complained about Boston's Big Dig, but appreciate it now.
Stevens Creek bridge connecting Shoreline Business Park & Moffett Field?
  • Kasperzak: We need a bridge.
  • Neal: Hasn't looked at it much; would need to research it.
  • Clark: Agrees with Kasperzak.  We need a bridge for public safety reasons.
  • Capriles: How and where we put a bridge are important.  Need to consider environmental concerns.
  • McAllister: Could help with N. Bayshore access issues.
  • Inks: Maybe a bridge, maybe not.
How would you create a more environmentally sustainable city?
  • Inks: We're on a path to sustainability.  Bike paths, reducing auto traffic.
  • McAllister: Need to make sure we can fund things.  In recent general plan we did recycling, public transit, got input from people.
  • Capriles: We're working toward zero waste.  Can we remodel buildings instead of tearing them down?
  • Clark: MV is on the right path, need to implement general plan.  Building near transit routes, green building standards, improving transportation infrastructure.
  • Neal: Getting traffic lights and public transit right would go a long way.  Took the bus to get to the event -- took 2 hours and $8.  Recycling pickups to once a week.
  • Kasperzak: Energy upgrade Mountain View program.  Lost a great opportunity by not including housing in N. Bayshore
Plastic bag ban
  • Kasperzak: I'll probably support it when the report comes out.  We need to change our habits.
  • Neal: Opposes "police state" mandates like this.
  • Clark: Negative externalities exist and we should compensate for them.  We'll look back in 50 years and marvel at how lazy we were.  Outright ban may be unnecessary; maybe something phased in.
  • Capriles: What's good for the whole?  We need to sacrifice and suffer for our children.
  • McAllister: My business uses paper bags.  Supports the ban.
  • Inks: Uses canvas bags himself, opposes the ban.  Plastic bags don't even show up on the list of major waste products.
Google wants to build housing E of 101, city council voted it down.  Your opinion?
  • Inks: Housing proposal was generic, not Google-specific.  Housing should have been considered, but got sidelined.  Ultimately planners will decide.
  • McAllister: Voted against.  Lack of services would cause a lot of travel over the freeway.
  • Capriles: Opposed.  Saw no compelling reasons.  Environmental impact was unclear.
  • Clark: Supported it while on the commission.  People go to work daily but use services only weekly.  Strongly objected that the 20-year plan didn't include housing N of 101.
  • Neal: Supports it.  Google has services on-campus, what's the deal about lack of services?  Let people live close to work.
  • Kasperzak: Supported it, wanted it as part of public transit commitments.
Affordable housing?
  • Kasperzak: Supports subsidized housing.  
  • Neal: Would work with developers.  Would avoid bureaucracy where developers get stuck in government approvals process.
  • Clark: Overall housing supply needs to increase.  We had no new rental complexes for 10 years.  Look for long-term solution.  Truly "affordable" housing is pretty tough here.
  • Capriles: Need developers, employers and citizens to work together.
  • McAllister: Has employees who need affordable housing.  Need increased density, but we don't have enough density to support affordable housing.  If the community wants it, the community should support it, perhaps through tax.
  • Inks: Affordable housing is subsidized housing.  Public survey didn't support parcel tax.  Unfair to force developers to bear the costs.
Narrowing El Camino Real to 4 lanes?  http://www.grandboulevard.net/
  • Inks: Opposes. VTA proposal didn't make sense.
  • McAllister: Opposes.  VTA proposal is nonsense.
  • Capriles: Opposes.  Where would the cars go?  Good idea in theory.
  • Clark: Opposes.  Issue seems to be off the table.  We have other approaches in progress.
  • Neal: Opposes.  It'd create a huge mess.  Existing express buses aren't getting used. Opposes efforts by county and state to consolidate things; keep Mountain View unique.
  • Kasperzak: Lone supporter.
How would you use the Shoreline Community Fund?
  • Kasperzak: Supports sharing excess property tax income with local schools.
  • Neal: Complex issue, doesn't have a personal position.  Fund created in 1969 to redevelop this area.  Send it to the citizens for a vote.
  • Clark: Wants a longer term solution.  Supported the short term stopgap measure.  Make sure funds are first used for redevelopment, also schools secondarily.
  • Capriles: Supports the schools, but also important to consider whole picture.
  • McAllister: Supports sharing with schools after other budgetary concerns are considered.
  • Inks: Send it to the voters, agrees with Neal.  
Closing statements:
  • Inks: Solid record of protecting taxpayer dollars.  Constituent support.
  • McAllister: Residents first.
  • Capriles: Cited HP experience and community service accolades.  Commitment to the community.
  • Clark: Represents young workers.  
  • Neal: Cares about representing local citizens, getting issues in front of voters.
  • Kasperzak: Has experience as the incumbent.  

Thursday, August 30, 2012

Get rid of "3 more files to edit" warning in vim

The E173 "... more files to edit" error in vim drives me nuts.  To get rid of it, I added this to my .vimrc:


" Get rid of annoying 'more files to edit' error:
nno :q :qa
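
A fancier variant I've seen recommended (and only lightly tested myself) is a command-line abbreviation that expands only when q is the entire command, so other commands starting with q are left alone:

" Expand :q to :qa, but only when 'q' is the whole command:
cnoreabbrev <expr> q (getcmdtype() == ':' && getcmdline() ==# 'q') ? 'qa' : 'q'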

Saturday, August 11, 2012

Social status: the underground river


Status is a touchy subject in the West.  We spent a long time believing that nobles were fundamentally better than serfs, and nowadays most overt assertions of authority are considered gauche.  We don't bow to our president, and in most newer business settings we don't even use titles or honorifics: some companies take pride in everyone from janitor to CEO going by their first names.

But that doesn't mean our society is an egalitarian utopia: parents are legally responsible for their kids, managers are responsible for their reports, and police decide whether you get a ticket or go to jail.  We expect people to be fair, judicious, and limited in their exercise of authority, and to treat others as equals the rest of the time.  Overall I think that's a great way to operate.

More subtly, though, people also believe in respecting their elders, deferring to people with more experience, respecting "sweat equity", and special statuses like "I was here first".

This leads to a lot of ambiguity.  Should a rookie cop lecture a retired veteran about gun safety?  Should a new manager defer to a senior worker?  There are many stories about incautious lieutenants trying to assert authority over salty old sergeants.  Remember, all this has to be resolved without compromising our egalitarian ideals.

Contrast many Asian languages, where you can hardly get through a sentence without expressing status!

In Cambodian, for example, the generic word for "you" is only used in formal documents.  Otherwise it's always "older sibling", "younger sibling", "aunt/uncle", "grandma/grandpa", or a dozen other titles.

The very best leaders are good at keeping everyone pointed in the right direction without ever offending our sense of egalitarianism.  But it's a tough skill to cultivate: I've spent hours writing emails that try to balance a clear vision for how I think things can move forward, and what I need from other people, without sounding like I'm trying to order them around or subvert their own judgment.  Even in writing this article, it's hard to balance conveying my enthusiasm in a confident and engaging way, while still encouraging people to question my assertions and contribute their own insights.

Wednesday, August 08, 2012

Reflective street crystals

One of my favorite pieces of natural urban beauty is these "street rainbows" I see once in a while.



They're caused by a sandy powder on the road.  I'm fairly sure it's a kind of retro-reflective crystal used to make white road lines shine brightly in car headlights: as you can tell from my shadow in the photo, the effect only appears when the sun is almost directly behind my head.

Tuesday, August 07, 2012

Kingston v200 series SSDs have problems

From my research and testing, I'm going to be avoiding v200 series Kingston SSD disks.  (But I've had great success with the older v100 series disks).

Users complained about very slow write performance on the sub-256GB disks in the series.  Kingston released a firmware update claiming to fix those issues, and also claiming that the 256GB disks are unaffected:
http://www.notebookreview.com/default.asp?newsID=6488&review=kingston+ssdnow+v200+ssd+7mm

I picked up a 256GB unit from newegg (SV200S3N/256G) for $199 and tested it today in several machines.

In my Dell T3500 workstation with an 82801JI (ICH10 Family) SATA controller, running Ubuntu Lucid (10.04), I couldn't get the disk to come up at all.  I got this in dmesg:


[1129999.229709] ata3: irq_stat 0x00000040, connection status changed
[1129999.229712] ata3: SError: { DevExch }
[1129999.229719] ata3: limiting SATA link speed to 1.5 Gbps
[1129999.229722] ata3: hard resetting link
[1130001.452632] ata3: SATA link down (SStatus 1 SControl 310)
[1130001.452643] ata3: EH complete
[1130001.458368] ata3: exception Emask 0x10 SAct 0x0 SErr 0x4000000 action 0xe frozen

Then I switched to another machine running Ubuntu Precise (12.04) with a Supermicro motherboard, plugging into one of the onboard SAS ports (looks like Intel ICH10 82801JI).  I tested writing to the bare block device:
$ dd if=/dev/zero of=/dev/sdb bs=8M count=128 oflag=direct

That got 115MB/sec.  A second run with seek=128 added got only 54.5MB/sec, and a third run with seek=1024 gave 91.2MB/sec.  Numbers that vary this wildly on an SSD are bizarre and worrisome.  In dmesg I saw lots of messages like this:
[67334.890127] mpt2sas1: log_info(0x31120303): originator(PL), code(0x12), sub_code(0x0303)
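
To repeat the test quickly, here's the same thing as a loop (a sketch -- substitute your actual device node, and note that it overwrites the disk):

$ for seek in 0 128 1024; do dd if=/dev/zero of=/dev/sdb bs=8M count=128 seek=$seek oflag=direct 2>&1 | tail -1; done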

Then I switched to a port on an LSI SAS2008 controller in the same machine, and consistently got 257MB/sec, which is excellent.

So from my brief testing, it looks like these disks may do okay if you put them on the right SATA controller, but have significant and varied problems on other controllers.

Attempting to codify thought


I've been thinking lately that AI research doesn't spend enough time trying to understand and emulate what it means to have a Train of Thought, or for that matter, what a Thought is.  So here's my best guess.  The next step is to try actually coding it up.

#include <vector>

class Thought;  // Forward declaration for Association.

class Association {
 public:
  Thought* other_thought;
  float association_strength;
};

class Thought {
 public:
  void Fire();
  void activate();
  void randomly_activate_nearby_thoughts();

  float activity;  // Rises when Fire()d, decays over time
  std::vector<Association> associations;
};


And a specific type of Thought would be a SensoryMemory, a sort of leaf node that has not only associations with other Thoughts but also a certain sensation unique to that SensoryMemory: a particular smell or sound or visual feature.

Sensory input threads: a Sensor is a thread that takes in sensory input and calls Fire() on the corresponding SensoryMemory objects.
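
In the same sketch style, a SensoryMemory and its Sensor thread might look like this (SensoryInput, matches() and read_sensor() are stand-ins I'm inventing):

// A SensoryMemory is a Thought plus a feature detector.
class SensoryMemory : public Thought {
 public:
  bool matches(const SensoryInput& input);  // e.g. a particular smell
};

// Sensor thread: fire every memory that matches the incoming stimulus.
void SensorLoop(std::vector<SensoryMemory*>& memories) {
  while (true) {
    SensoryInput input = read_sensor();
    for (SensoryMemory* m : memories)
      if (m->matches(input)) m->Fire();
  }
}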

void Thought::Fire() {
  activate();
  wait_for_random_interval();
  randomly_activate_nearby_thoughts();
}

void Thought::activate() {
  activity += approximately(1.0);
}

void Thought::randomly_activate_nearby_thoughts() {
  // Weighted by association_strength:
  Thought* related = choose_random_related_thought();
  if (related->activity >= 1.0) {
    // Both thoughts are active at once, so strengthen the link.
    // Maybe the Association threads make this unnecessary?
    strengthen_association(related);
  }
  related->activate();  // The activation the function's name promises.
}


Association Threads: an Associator looks at active Thoughts and makes new thoughts that tie together chains of thoughts that are all active right now, or associates active unrelated thoughts:

void Associate() {
  Thought* thought = choose_random_active_thought();
  std::vector<Thought*> chain =
      recursively_find_all_connected_active_thoughts(thought);
  if (chain.size() > 1) {
    // Associate a new thought with the entire active chain:
    create_new_thought(chain);
  } else {
    // A lonely thought!  Find it a friend.
    Thought* other = choose_random_active_thought();
    add_association(thought, other);
  }
}

Decay threads reduce the activity level of thoughts over time.

Chaos threads (probabilistically) randomly activate Thoughts.
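
A minimal sketch of those two background threads (the decay rate, firing probability, and the coin_flip() and choose_random_thought() helpers are placeholders I'm assuming):

// Decay thread: exponentially shrink every thought's activity over time.
void DecayLoop(std::vector<Thought*>& thoughts) {
  while (true) {
    for (Thought* t : thoughts) t->activity *= 0.95;
    wait_for_random_interval();
  }
}

// Chaos thread: occasionally fire a random thought so the train
// of thought never stalls completely.
void ChaosLoop(std::vector<Thought*>& thoughts) {
  while (true) {
    if (coin_flip(0.01)) choose_random_thought(thoughts)->Fire();
    wait_for_random_interval();
  }
}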

So your Train of Thought is the sequence of most active Thoughts over time.  What's missing?

TODO: perhaps we need some sort of global arousal level that treats pleasure and pain properly, causing us to shrink from pain and seek pleasure.  Or a notion of how much we are seeking: fatigue makes us sleepy, food increases our curiosity.

Thursday, June 21, 2012

Stop worrying about the "923-bit encryption" press release

The popular press usually gets crypto stuff wrong, but this time I can't find anybody within even a mile of the truth.  Fujitsu et al. released a press release a few days ago with the breathless title "Fujitsu Laboratories, NICT and Kyushu University Achieve World Record Cryptanalysis of Next-Generation Cryptography"

It's unfortunate that they chose to take this approach, because their misleading yet juicy quotes are feeding popular press articles that mangle the truth even further.  That leaves the popular and mistaken impression that nobody actually knows how to keep anything safe using crypto, whereas the truth is that there are quite a few well-established, secure practices that work great -- most people just can't be bothered to follow them.

Here's what's actually going on:

Pairing-based crypto is a tiny branch of cryptography that's mainly of interest to researchers (I needed it for my dissertation), and it's not in common use in any commercial product I'm aware of -- which is probably why it's so hard to get the story straight.

The most common thing the press gets wrong in these kinds of articles is making claims about security in terms of key lengths.  There are two main branches of modern crypto: private (symmetric) key and public (asymmetric) key.  They're usually used together, and while they both use keys, symmetric crypto uses much shorter keys than public-key crypto does.  For a secure symmetric cipher like AES, a 128-bit key is plenty long, and for a secure public-key algorithm like RSA, 2048 bits is plenty long.
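
That's because the best attacks differ: a 128-bit AES key can only be brute-forced (roughly 2^128 work), while an RSA modulus falls to factoring, which is vastly cheaper than brute force -- by the usual estimates, RSA-2048 gives security comparable to only about a 112-bit symmetric key.  So comparing raw key lengths across different kinds of crypto tells you almost nothing.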

In this case, pairing-based crypto is a third kind of animal, and its security and key lengths don't have any bearing on either public or private key algorithms.

So the bottom line is that it's a very interesting research result, with absolutely no impact on the crypto people actually use (which is almost always what we can safely assume when the press gets breathless about some new research result).

Thursday, May 17, 2012

Understanding CPU, I/O and memory bandwidth choices (as of May 2012)


Every year or two I have to go catch up on the latest PC hardware so I can make reasonable decisions about what gear to buy.  Here's the result of my latest inquiry into high-end Intel hardware.  Thanks to William over at Puget Systems for patiently explaining it.

CPUs:
  • Sandy Bridge is the latest Intel CPU architecture, available in i3/i5/i7/Xeon and more.  All of those CPU lines predate Sandy Bridge, so, for example, i7 does not necessarily imply Sandy Bridge.
  • Sockets: Intel currently uses mostly LGA1155 and LGA2011.  i5 and i7 use LGA1155; i7 and Xeon use LGA2011.  i7 and Xeon motherboards have different chipsets, so they generally aren't interchangeable even though they share the LGA2011 socket.
  • There are only a few LGA2011 socket i7 chips, vs. many Xeon options.
  • "Server motherboard" typically means Xeon, and dual-socket boards are more common than single-socket (6 single vs. 27 dual mobos on newegg right now)
Memory:
  • i5 supports dual-channel memory (modules installed in pairs).  Sandy Bridge i7 and Xeon support quad-channel memory, which can double the memory bandwidth.
  • i5 and LGA1155 are limited to 32GB RAM.  i7 typically limited to 64GB RAM (although I see some 128GB MSI i7 mobos).  Dual-socket Xeon boards can do up to 768GB
  • ECC memory requires Xeon
PCI:
  • PCIe has 3 standards: 1.0, 2.0, and 3.0.  3.0 is brand new and mostly only used for video cards right now.  1.0 does 250MB/sec per lane, 2.0 does 500MB/sec per lane, 3.0 does 1GB/sec per lane.
  • Xeon has a built-in PCIe controller
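
To put the lane numbers in perspective: a 16-lane PCIe 2.0 slot tops out around 16 x 500MB/sec = 8GB/sec, and the same slot at PCIe 3.0 doubles that to roughly 16GB/sec.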

Saturday, May 12, 2012

Gnome Network Manager (lucid) "apply" button greyed out

My 2wire DSL modem has a DNS server in it that I can't shut off, and when we start heavily using the network, DNS lookups get really slow or fail completely.  Lame!  Sometimes I've resorted to editing my /etc/resolv.conf by hand and then doing chattr +i to keep it from changing, but that's pretty inelegant, and sucks when I want to use wireless at work.
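
For reference, that manual workaround looks like this (heavy-handed, since the immutable flag also blocks legitimate updates until you remove it):

$ sudo sh -c 'echo "nameserver 8.8.8.8" > /etc/resolv.conf'
$ sudo chattr +i /etc/resolv.conf   # lock the file so dhclient can't rewrite it
$ sudo chattr -i /etc/resolv.conf   # unlock again before switching networks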

Turns out there's a solution in the gnome network manager GUI.  Right-click on the icon, then Edit Connections... Wireless... edit the appropriate network... IPv4 Settings... then change to "Automatic (DHCP) addresses only".  Then you can put 8.8.8.8, Google's public DNS server, in the "DNS servers" field.

Now, when I tried that just now, it worked fine except that the "apply" button was greyed out.  I tried a bunch of things, like killing nm-applet and restarting it as root, but what finally seemed to do the trick was right clicking on the icon and unchecking "Enable networking" before doing my edits.

Thursday, April 05, 2012

Can we find out if our universe is real?

Tonight I sat down with a friend and explored a question that’s been rolling around in my head for a long time now, one that may have an answer, or may not be decidable by mankind at all.

I’ve written a few pages of introduction at the beginning to get people into the general ballpark, but the rest is pretty much our shorthand notes of interesting features in the space.  So don’t expect it to be a finished, comprehensible document; it’s just a bunch of vaguely clustered concepts.  We were exploring a space, and our only goal was to see if the question is worth thinking about more.  We think it is.



Can we find out if our universe is real?

Section 0: Introduction (are we completely nuts?)
The core idea here is that maybe we’re living in a simulated universe, and maybe we can detect that.  If we could, it would be the single most important scientific discovery in the history of mankind.

It’s also possible that we’re in a simulated universe but we can’t possibly detect it.  It’s a real possibility, and it’s the one people usually bring up as a reason why there’s no point in discussing it.  But unless undetectably simulated universes are the only kind of simulated universe we can imagine, then we shouldn’t stop asking questions yet.

If there’s any possibility we’re both in a simulated universe and that we could detect that, it’s certainly worth spending some time on.  How would we find out?  What would the consequences be?  Could there be bad consequences?

The positive possibilities include potentially being able to break the known laws of physics to help solve our problems, being able to negotiate with or steal parts of the “parent” universe, and gaining knowledge, for its own sake, of the way things really are in a grand sense.  The negative possibilities include crashing our universe (if it’s a badly written computer program and we experiment with edge cases of its behavior) or getting our universe shut down (if we do something that gets negative attention from whoever’s in charge).

Are there plausible circumstances in which we might succeed?  Well, here’s one example of how it might all play out:

Let’s hypothesize that our universe is a big brute force particle simulation running on a giant computer in our “parent” universe -- that is, there are a bunch of “gluons” represented as some sort of “bits” in the computer, and something resembling our mathematics is performed on them so that they interact according to what we perceive to be the laws of physics.  That simulation then is the true underlying nature of everything in our universe.

And let’s further hypothesize that the math they use has some of the kinds of subtle errors that our own computations do -- that digits get chopped off past some number of decimal points, like getting past the edge of a calculator screen.  These errors don’t interfere with the operation of our universe, but if you carefully measure, you can detect them as highly suspicious inconsistencies.

We run some tests in a particle accelerator, trying to create situations that magnify the subtle errors into something we can measure.  And on the 574th setup, we discover that the usual laws of physics are a little off in this really specific case.  That’s enough for a Nobel prize on its own, but in this case, if it turns out to strongly confirm our hypothesis, then we’ve just acquired the first hard evidence that there’s a whole universe behind what we can see.

It would open up a new branch of physics, one with absolutely limitless potential compared to the laws we’re used to dealing with, because this branch pulls back the curtain and gives us a peek at an underlying reality that we have never before glimpsed.


Section 0.5: Where we’re going with this

Section 1: The Problem Space talks about the kinds of categories we can imagine simulated universes falling into.
Section 2: Goals covers the end goals we’re trying to reach.
Section 3: Crazy Side Ideas covers things that came up while we were thinking about this topic, that weren’t directly related.  (But they’re a great side-effect of thinking about this topic!)
Section 4: Implementation Sketches talks about a few simulations we tried to imagine building, to give us hints into the kinds of inconsistencies that might accompany them.
Section 5: Suspicious Things lists things we see in our own universe that might be the result of being in a simulation -- phenomena that seem unlikely results of an undirected natural process.
Section 6: Testable Hypotheses is our first pass at imagining experiments whose results might conclusively prove that we’re in a simulation.


Section 1: The problem space

What are the ways that some intelligent agent might go about simulating our reality?  We can imagine a lot of ways to do it, plus a whole class of things that we can’t possibly imagine.  But it gives us some starting places to begin our search.

It seems like most scenarios would have some common elements:
  • A “parent” universe with some sort of intelligent life that knows how to create reality simulations
  • A “creator” -- an intelligent actor with some reason for creating a simulation
  • A “simulation implementation” -- how did they do it?  A big computer with a bunch of simulated particles?  Something more efficient than that?  A bunch of simulated brains?  

Parent Universe

Here are some ideas for what the parent universe might look like:

- Just like our own in almost all respects
- Just like our own but with a lot more matter or energy
- Itself a simulation
- Similar laws in a higher dimensional space
- No similarity (we’re like a game-of-life experiment, and have no hope of understanding the parent universe)

Parent Universe timescale

How does time in the parent universe relate to our notions of time?

- In lock step with our own (a “realtime” simulation)
- Time goes massively faster in our universe (they’re watching galaxies grow)
- Time goes massively slower in our universe (we look like statues because they think so fast)
- No correlation at all
- Time is a function of resources, so it sometimes runs slower or faster.  (Does building supercomputers make our simulation run slower?)

Simulation Creators

What kind of being would create our reality?

Incomprehensible - we couldn’t possibly understand
Programmer - something like a person with a big computer
Dumb AI with lots of time - simple algorithm trying to optimize for something and building our universe as a very inefficient way of accomplishing its goal
Major Superintelligence - we can’t comprehend their brain
Accident - We’re an eddy in a cloud in some atmosphere; there’s a “parent” universe that created our big bang, but nobody did it on purpose.
End user - Somebody else wrote the simulation, and lots of other agents use it to create their own universes for their own purposes.

Simulation Reasons

Why might they have created our reality?

- Business (someone is selling the art/code/music we make, or the chemical compounds and plants and animals that evolved on our world)

- Military (threat simulation)
- Attempt to create superintelligence (Are we smarter than our creators and in situations analogous to what they face?) -- maybe it’s easy for them to create big brute-force particle simulations and they’ve never bothered to figure out how to code up an AI directly, or they’re waiting for us to do it.
- Simple computations being mapped out as an attempt to do Solomonoff induction on one’s own universe (though the line between being a sim and being the basement gets blurry if our universe is a simple computation, especially given Tegmark Level IV type realities.)
- The experiences of the actors in our universe are somehow terminal-utility-ful to the simulators.  (We’re living out their fantasies)
- Some other aspect of our universe is somehow terminal-utility-ful to the simulators. (Our universe is an abstract artwork of some sort hanging on a wall somewhere.)
- Psychological simulations of game-theoretically relevant situations
- Academic study of anthropology, sociology, psychology
- Historical curiosity -- see the primitive aliens!
- Education (class project)
- Blackmail Fodder (basement super villain is holding us hostage)
- Casual unintentional (it’s a screensaver)
- Casual intentional (it’s a game, or a child’s toy)

If we’re made of actual particles in a larger universe (eg., a bunch of hydrogen in a giant petri dish)

- Food (tasty nebulas)
- Chemical process (making helium)
- Accidental byproduct (the hydrogen was fuel and we’re just exhaust)
- Biological process

Simulation Implementations

What are the broad ways we can imagine trying to simulate a universe?

Basic building blocks

- Brute-force gluon (particle) simulation
- Optimized particle simulation (fancy math that saves cycles or storage)
- High level brain simulation (just a brain living in a video game, like the Truman Show)
- Experience simulation (we might be “real” people in the parent universe whose realities are being totally subverted, like the Matrix)
- Actual particles (hydrogen in a petri dish)

- Their space has more dimension(s) than ours and our universe is just a bunch of particles held in a plane by some magnets or something.
- Technology not conceivable by us (catchall category)
- Very (large) intelligent mind on drugs/dreaming/day dreaming
- Unintentional butterfly effect in another universe.

It’s also possible that our universe is a simple computation that has some platonic reality of its own and is also being simulated by many other universes (which again is not something we could detect, except in cases where one of them has a lossy/buggy implementation or meddles.)

Simulation Scopes -- what’s being actually simulated, and what’s just a facade?

- Our entire universe is a simulation
- Our Galaxy (and the rest of the universe is just fake “wallpaper”)
- Our Solar System (and the rest of the universe is just fake “wallpaper”)
- The planet Earth (and the rest of the universe is just fake “wallpaper”)
- Some group of people (you and your friends)
- Individual person (solipsism -- you’re the only one who goes on thinking when you’re alone)
- Individual experience (it’s only you, and you’ve only been around for a week)

Lazily evaluated mix of the above -- things don’t exist until we look at them.

Simulation Resource Limitations

Rather than choosing explicitly what to simulate, maybe simulation parameters are a function of available resources -- perhaps the simulation terminates when:
- it runs out of CPU cycles or RAM
- some amount of time has elapsed in the parent universe
- some goal is met (once intelligent life forms and reaches a certain level, the simulation freezes for evaluation)



Section 2: Our Goals
- Create experiments to determine whether we’re in a simulation, and then
   . Try to contact the creator(s) and
       . make friends
       . do recon and figure out friend/foe
   . Try to root the machine and explore the parent universe without permission
   . Try to break the rules

Section 3: Crazy Side Ideas
Can any of these strategies work for us? (eg., could building a big particle simulator be easier than building an AI?)
Cosmic Background Radiation = Federation radio jammer to keep immature races from hearing signals from advanced races.


Section 4: Implementation Sketches
If we were going to simulate a universe, how would we do it?

Scenario 1: Brute-force universe-wide particle simulation on a digital computer
- Create the basic gluon classes and rules for interaction
- Store the gluons in some sort of data structure
- Run in time-slice epochs, with each gluon interacting with each other one, one at a time.  Then repeat.  (Sketched below.)
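
A toy sketch of that epoch loop (Gluon and interact() stand in for whatever the parent universe’s physics engine actually uses):

// Every gluon interacts with every other gluon once per time slice:
// O(n^2) work per epoch.
void RunEpoch(std::vector<Gluon>& gluons) {
  for (size_t i = 0; i < gluons.size(); ++i)
    for (size_t j = i + 1; j < gluons.size(); ++j)
      interact(gluons[i], gluons[j]);
}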

Scenario 2: Efficient particle simulation
- Like scenario 1, but a clever algorithm lets big clouds of gluons act as a single unit without having to worry about each individual one.

Scenario 3: Brain particle simulation -- scan some “real” brain, then simulate all the particles making up that brain and feed it fake stimulus.

Scenario 4: AI simulation of all humans -- a superintelligent being “pretends” to be all of us and our surroundings.



Section 5: Suspicious Things
Does our reality seem like the kind of thing that might be easy to simulate, or annoyingly difficult?

- Our universe appears to be discretized

- Humanity hasn’t stone-aged itself yet

- Drake’s equation: why do we appear to be alone?

- Small number of elementary particles -- there are only a few simple kinds of subatomic particles.  That’s much easier to code than if, say, elements were indivisible (no such thing as atom smashing) with unique properties and there were thousands or millions of different ones.

- Aggregates of people/things behaving in ways more predictable than butterfly effects might imply.  (eg. is someone always winning at the stock market?)

- Quantum entanglement: does that seem suspiciously easy to code up, or suspiciously hard?  Are there parent universes that might make it really easy?

- Speed of light ensures that we have very limited interactions with distant objects.

- Relativity.  More suspicion because of locality.  Less suspicion because time is less linear and thus harder to keep track of than if it were universal.

- Time: we only go forward through time. State gets thrown away as time progresses (we do this all the time when we code, not bothering to save intermediate states)

- Dark Matter: Maybe someone just tweaked a parameter for how heavy our galaxy was so things wouldn’t fly away!

- Religion?  (If we’re in an Anthropology department’s simulation or a video game, then were Gods real, and actually just bored grad students from the parent universe?)

- Emptiness: why bother simulating so much useless empty space?

- Cosmology -- universe seems consistently to have come from a small single big bang -- no really weird shit in the hubble deep field image.  This hints that we’re not a tiny accidental chemical reaction on some weather pattern or plant in the parent universe.



Section 6: Testable Hypotheses


Detecting the wallpaper -- stars placed in a mathematically predictable pattern (eg., a simple pseudorandom number generator) would be a smoking gun.

Numerical errors -- if interactions between quanta were rounded down to the nearest integer, violating conservation in some extreme case
The Grand Unified Theory turns out to have its root in discrete math.

Pixels -- are positions discretized to a stationary grid in some reference frame?  Are there jaggies?  (Is true diagonal motion impossible?)

High energy experiments -- do things start behaving in suspicious ways at high energies (gluon position = NaN)

Rounding errors: eg. do a quantum experiment that is supposed to turn out a certain way one in a billion times, and the rare outcome never happens.

Quantum entanglement stops working with sufficiently large clusters of particles.

Super computers or particle accelerators above a certain size always break.

Satellites sent out of the solar system start sending signals from slightly wrong origins.
- Satellites smash into “the sky” :)
Weird things happen if you actually try to compress too much matter into the same space on earth.

Bayes: Can we use human history as a training/verification set on hypotheses?  That is, how would this document have looked in the 1800’s, and what would that have led us to believe when we successfully went to the moon, discovered gluons and left the solar system?

- Victorian hypothesis: We’re sitting around in 1880, and guess that the earth is simulated, and the cosmos is all wallpaper.  Test: go to the moon, planets, stars.  Result: yep, real moon, real planets. (Crazy time -- moon landing really was fake... because there wasn’t a moon!)
- Victorian hypothesis: We guess we’re in a big particle simulation.  Test: build big microscope, look at atoms.  Result: very suspicious -- everything’s discretized!

Sunday, March 25, 2012

Aluminum air batteries for extended trips in electric cars

I like the idea of owning an electric car, except that we frequently make trips to see family members 100 and 350 miles away.  So I've been scratching my head about ways to let an electric car do that.  (The obvious approach is to buy a Chevy Volt or a plug-in Prius).

Today I read up on aluminum-air batteries, which have some neat properties.  They have very simple chemistry from very plentiful sources: aluminum, water and carbon.  They let the aluminum oxidize using oxygen from the atmosphere, and as the aluminum anode dissolves, you get aluminum hydroxide, which can be reprocessed into pure aluminum.

The other great thing about them is their theoretical energy density of 8 kWh/kg, far higher than we see with lithium-ion batteries.

The downside is that they're single-use -- no recharging.  But for a road trip, that may not be so bad -- instead of gassing up, you trade in the depleted battery modules for fresh ones and head back on your way.  I'm imagining a standardized battery module rack in the trunk that you only fill with modules when you're planning a long trip.

The economics kinda sorta work.  This paper claims they can get about 1.3 kWh/kg of the 8 kWh/kg theoretical maximum, and that they could recycle the batteries for about $1.10/kg.  That works out to $1.10 / 1.3 kWh, or about $0.85/kWh -- call it $1/kWh.  The Nissan Leaf takes about 34 kWh per 100 miles (0.34 kWh/mile), so your road-trip miles would cost about $0.34/mile.  That's 2x or 3x the price of gas for a traditional car on the highway, but you're only paying it on long trips, and it saves you from having to install a gas motor + generator in your all-electric car.

The other interesting possibility is using solar thermal plants to reprocess the spent aluminum hydroxide.  It decomposes at 572°F, which is a downright easy temperature for a field of mirrors to produce.  The great thing about that process is that the solar thermal plant doesn't have to actually generate any electricity directly -- it's just generating heat to strip off the oxygen atoms, which turns into electricity later when it's in your car.  So the plant is much simpler than a solar thermal electric plant would be.

Here's a paper that proposed that very arrangement back in 2010.

In my reading, it was sad to see that a lot of the startups from the 1990s and 2000s working on aluminum-air batteries have closed down.  I only found one or two companies selling metal-air batteries at all, and those sell zinc-air batteries, which have a lower theoretical energy density limit.

So maybe it's time to start looking into this technology again, especially now that electric cars are hitting the market.

Monday, March 19, 2012

Disable "browser back" and "browser forward" keys in gnome

My Lenovo ThinkPad has keys above the left and right arrow keys that are wired by default to the browser's "forward" and "back" actions.  This is a horrible idea, because it means I frequently hit them accidentally while entering text in a textbox and lose all my edits when the browser leaves the page.

This fixed it for Ubuntu Lucid (running gnome):

System ... Keyboard Shortcuts ... Add ... Name: Do nothing ... Command: /bin/true ... Apply ... click in the "Shortcut" column for the newly created shortcut ... press the browser back button.

It should display as "XF86Back".  Now repeat, creating another shortcut for XF86Forward.  Problem solved!
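
If you'd rather bypass gnome entirely, I believe xmodmap can unbind the keys directly (166 and 167 are the usual ThinkPad keycodes for these, but check yours with xev first):

$ xev                                  # press the key to learn its keycode
$ xmodmap -e 'keycode 166 = NoSymbol'  # XF86Back
$ xmodmap -e 'keycode 167 = NoSymbol'  # XF86Forward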