So What's in a PhD

I remember watching Dragonball-Z, where Gohan's mom, Chi-Chi, always wanted him to get a PhD. That really hammered home the importance of the credential, the PhD, as the way to be recognized as an expert. Since then, though, I've become somewhat of an autodidact who learns just for the sake of learning. Still, I recently tweeted this:


The response was surprising. However, I stand by the statement. I first stumbled across this quote while reading "The Mathematical Experience." The "80 book benchmark" shattered the final remnants of the childhood illusion that you need a PhD, some mystic level of achievement, to become an expert. In its place now stands a new belief: becoming expert-level is not that hard. It's a concrete milestone that anyone sufficiently motivated can achieve.

I really like this 80-book expert benchmark because it has all the classic signs of a good goal: it's measurable, achievable, and still decently ambitious, especially if you love books. Becoming an "expert" is not that hard, especially if you don't need the credentials. And thankfully, if you work in startups or are creating something, credentials are not that important. If you really do need credentials, you can always hire someone with the right three-letter acronym.

80 books, while seemingly daunting, is not that bad. An average US working citizen spends almost an hour commuting to and from work. If she used that time to read instead of 'gramming or texting, she'd get through a decent number of books per year. For a book printed in normally sized font, a reader of this blog could probably read a page per minute, including the appropriate in-text highlights for subject-matter retention. That means you, dear reader, could probably finish an average-sized book of ~360 pages per week, or ~50 books per year. You could get a PhD in 2 years with time to spare! [1]
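To make that arithmetic concrete, here's the back-of-the-envelope math in code. The page-per-minute pace, hour-long commute, and ~360-page book come from the paragraph above; the six reading days per week is my own added assumption.

```python
# Back-of-the-envelope reading math; inputs are the assumptions stated above.
pages_per_minute = 1
reading_minutes_per_day = 60      # roughly the average US round-trip commute
reading_days_per_week = 6         # assumed: one day off per week
avg_book_pages = 360

pages_per_week = pages_per_minute * reading_minutes_per_day * reading_days_per_week  # 360
books_per_year = pages_per_week * 52 / avg_book_pages                                # ~52
years_to_80_books = 80 / books_per_year                                              # ~1.5

print(pages_per_week, round(books_per_year), round(years_to_80_books, 1))
```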

The eighty-book mark is also great because it illustrates how little knowledge an individual needs in order to become an expert. Within startups specifically, the low barrier to becoming an expert makes investing in "inexperienced" or "young" founders less risky than it appears. I've already written on how young founders often found the biggest, baddest, and best companies. If you believe the thesis of this piece, then being young is less of a disadvantage, because it's so easy to get up to speed in an industry.

Expert-level specialization is still very real and necessary. Even a small-town library will usually have at least a few thousand books waiting to be checked out. If we only know 80 books' worth of knowledge, it's hard to imagine building a multi-faceted business on that alone. And with knowledge expanding at an exponential rate, the task seems even more daunting. This is one of the reasons why being an expert, or getting things done in the world, still requires you to collaborate with others and/or use tools to manage knowledge.

Of course, the 80 book goal doesn't cover all the nuances of being an expert. On Twitter, others brought up several counterpoints. First, books aren't always the best source of knowledge. I think this is certainly true. To the original goal, I would then add the caveat that you need to read 16,000 pages, or 80 books' worth of material at 200 pages per book, wherever those pages come from. This is especially true in fast-growing fields such as blockchain or AI, where the frontier lives in preprints, blogs, and Twitter. Where you choose to get the 16k pages certainly makes a difference in what you learn. The best practitioners are often the ones that aren't teaching the subject; their knowledge is either much more implicit or codified in a much more free-flowing form factor such as a blog post. Take, for example, some of Vitalik's writings on cryptocurrencies. If you're getting into crypto, his posts will serve you much better than any book proclaiming that blockchains are the second coming of the internet.

Another common retort to "80 books" was that being an expert is mostly about creation. I'd say people still need some base level of knowledge to be productive in a field, and as established above, 80 book-length pieces of information, or 16,000 pages, or two years of learning seems about right to me. You're probably familiar with the "Whartonite Seeks Code Monkey" or "I can handle the business side" meme pages. In short, they both poke fun at B-school students who don't really understand the mechanics of product or startups. When I first read TechCrunch and watched The Social Network, I 100% asked a technical friend of mine the same questions. I didn't have the requisite mental models of what a "startup" was to know why this was a bit silly of a request. Yet after reading blogs, working on products, and talking to folks to pick up the implicit domain knowledge, I now do. More generally, understanding a domain lets you see the "adjacent possible": the stuff that's hard enough that no one has done it yet, but not impossible. In physics this would be the difference between working on gravitational waves and working on time travel.

I look forward to getting my PhDs in bio, brains, and blockchains soon :)

---
[1] The speed at which a person reads will definitely depend on the subject matter as well. While reading Molecular Biology of the Cell, I read at approximately 15 pages per hour while taking detailed notes. At 1000+ pages, MBOC would take me ~70 hours to read cover to cover. A normal college-level bio class probably covers half the material in the book, so I could cover a semester in ~40 hours, or a normal work week. The caveat is that this assumes I can read 8 hours per day. I don't, but a sufficiently motivated individual who finds the subject matter interesting could. Warren does it.

Some Thoughts on "Confessions of a Sociopath"

While browsing bookstores in NYC, I stumbled across a striking cover. A porcelain mask. Female. Red lipstick, with the attached popsicle-stick handle. My eyes wandered down to the title in the bottom left-hand corner--"Confessions of a Sociopath". Intrigued yet hesitant, as I don't normally read pop psych, I picked the book up. I put it down twenty pages later. I didn't purchase it. It was a little bit too spooky for me. When you're left with a new lens through which to view your friends, colleagues, and possibly yourself, you'd feel the same way.

I ended up purchasing it at another bookstore later that same day.

M. E. Thomas, a pseudonym, writes in an extremely readable, transparent style. The compact volume of three hundred or so pages reads a bit like a diary, which is just what a sociopath would want: we want to feel like we know the other person. Yet, true to her sociopathic nature, the prose is lightweight, easy to read, and a bit detached. Just what we'd want in a fling: to be drawn in, to imprint our own desires onto, and to be left wanting to know more. An early moment we experience is mom and dad driving away, forgetting us at the park. A moment that "normiopaths" or "empaths" would regard with fear, tears, or some other visceral reaction, M. E. takes as a chance to prove that she can live without them. M. E. reveals nothing, and with this style, she draws us into her inner world.

We follow M. E. as she navigates growing up in a somewhat dysfunctional household and matures into a beautiful, intriguing, and cold young woman. Some of her childhood experiences, I think, readers may relate to, especially anyone who has been an outsider or immigrant in a new community. When you come in as an outsider, there are cultural norms, language cues, body-language differences, and inside jokes, picked up innately by some and intentionally learned by the outsiders. The difference here is that, for M. E., the language to be learned is that of emotion, something the rest of us might take for granted. The only strong desire she expresses is that of power: control over her environment and all the people around her.

We discover how she manipulates the people around her, often without their knowing; we learn, as she does, that emotions play no part in her mental world and that rules that don't advantage her can be broken. We're often reminded of rebels, criminals, and vampires, the darker archetypes of our mythology, characters we are enthralled with, at least in the sense that they have freedom from internal and societal retribution. By continually drawing on examples from literature, particularly from Steinbeck, she reminds us of favorite characters and perhaps of people in our own lives who fit this sociopathic mold. Not only does M.E. draw from these sketches, but she also draws from brain imaging and clinical research, as well as clinical definitions from psychiatry. This gives the extremely transparent, personal narrative a touch of scientific authority without being too drawn out.

The worlds of work and love figure heavily in this book. Sociopaths, as we learn, turn out to be tailor-made for corporate capitalism. Money, that impartial thing around which so much of daily life is centered, is a sociopathic object; it can be transformed into whatever desire we may hold. In jobs that involve stress, acting, or even normal office politics, sociopaths are able to lie and win their way into higher and higher positions. They're able to deal with the stress of firing someone or launching a new product better than we can. However, we find through personal anecdotes that a cutthroat character isn't always as good as it seems. The same impulsive behavior becomes less reliable at building the long-term relationships needed for management positions. I often thought of Steve Jobs as a possible archetypal sociopathic CEO, driven by a great product, through a path of scattered emotional breakdowns.

We later turn to the subject of love, and as noted before, it's more than tough to maintain a long-term relationship when the default position is to be whatever your lover wants you to be. But as we know, vulnerability, that is, being your true self, or at least acting and speaking as if you have nothing to hide, is the key to long-term relationships.

When we look around the office, or our college campus, or even into our loved ones' heads, we often wonder what is going on behind their eyes. In a certain way, while reading, I was reminded of the Turing Test: how can you tell whether this thing producing some output is intelligent and/or conscious? To extend the metaphor, this is a Turing Test for emotions. "Do I actually know what this person is feeling at this moment?" It's a bit frustrating. There will always be a lack of understanding when dealing with other people, simply because we haven't lived the exact same experiences they have.

How do we know our lover's smile is genuine? What if, like a chameleon, our lover is producing this contortion of facial muscles just to provoke the response we so desire? The end they want may not be to please us; they may be planning, plotting three steps ahead, using the goodwill generated by that smile to cajole us into changing the channel to whatever they wanted.

At the end of the book, we're left with M. E. as she goes about her life without a care in the world, without attachment, yet desiring a real connection and wanting kids. And we're struck by the normalcy of it all. These are desires that all of us feel; our mental worlds have just happened to mold our perceptions into slightly different arrangements. Our biological drives, along with our upbringings, can really make a difference in our lives.

If you read "Confessions of a Sociopath", you will wear the sociopath's mask. Some of you may find that it fits your face perfectly; you may gain answers to some pesky questions you've always had about yourself. If not, you may be disgusted and put off, but you will certainly wonder more about the man on the train with a certain glint in his eye. What is he thinking? How does he feel, if anything?

Biocomputers

A mostly speculative post on the far-ish future of biology.

This essay's a spiritual successor to my previous post on the subject. If you're an investor, feel free to invest with that essay's thesis in mind :). I'd like to take a few steps forward into the future and try to reason backwards to where we are now. I began the other essay with a comparison to the mainframe era, and I'd still like to draw on the computing metaphor.

Most people identify Intel + the microprocessor as a key innovation in the whole computing revolution. The same could be said about the Apple II, which finally incorporated the microprocessor into a consumer-ready, integrated product. I won't argue for or against either as marking a new age. Either way, those technologies were unequivocally tied together; they bookended the period in which the microprocessor led the way to general-purpose computing for everyone.

The integrated circuit was the culmination of billions of dollars in R&D, and today the heir to that technology is the iPhone 8, which holds some $150 trillion worth of transistors at 1957 prices. These devices let you do essentially anything and have become the cornerstones of global communications and global money. A person could live their life with just a phone.

I wonder what set of innovations might allow for the equivalent exponential jump in biology, the microprocessor for biology. What’s the equivalent of a general purpose computing device in biology, and why would we even want one? 

First, let’s look at the definition of the microprocessor according to Wikipedia.

"The microprocessor is a multipurpose, clock driven, register based, digital-integrated circuit which accepts binary data as input, processes it according to instructions stored in its memory, and provides results as output.”

If we swap out binary data for DNA, that sounds a lot like what a nucleus does. The speed and accuracy with which we can create new strands of DNA is limited right now. Biology is, of course, general purpose; the same DNA that codes for humans can be used to code for algae. However, most DNA is assembled for a specific purpose. The software, the ACTGs of DNA, is still way too expensive to sequence. Additionally, de novo gene synthesis and assembly, or making long DNA strands from scratch, is doubly plus expensive. While we herald a $1,000 human genome, and soon a $100 human genome, it really needs to be close to zero. And while a single base pair costs about $0.02 to synthesize, this too should be close to zero.

Why do I think $0.02 is way too high? Well, think about it this way: if every line of code cost $0.02, we would not have operating systems or any of the wonderful things we depend on today. To get to truly ubiquitous DNA manipulation, the cost has to be effectively $0, like the cost of manipulating electrons in a personal computer.
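To put that $0.02 per base pair in perspective, here's a rough cost check against a few well-known genome sizes (the sizes are approximate, and the linear-cost assumption ignores assembly overhead, which in practice makes long constructs even pricier):

```python
# Rough synthesis-cost check at ~$0.02 per base pair; genome sizes are approximate.
COST_PER_BP = 0.02  # dollars per base pair

genomes_bp = {
    "E. coli (~4.6 Mbp)": 4.6e6,
    "yeast (~12 Mbp)": 12e6,
    "human (~3.2 Gbp)": 3.2e9,
}

for name, bp in genomes_bp.items():
    print(f"{name}: ~${bp * COST_PER_BP:,.0f} to write from scratch")
# E. coli lands around $92,000 and a human genome around $64 million,
# which is why 'writing' whole genomes is still a capital project, not a hobby.
```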

In short, a biological microprocessor, a bioprocessor for short, would be able to manipulate DNA and spit out the results, the biological and chemical components of whatever we wanted, at near-zero cost. An integrated biocomputer would take inputs such as single cells, small molecules, blood drawn from individuals, and other enzymes, and return new cells with the right genes inserted. Attached to the main bioprocessor would be other chips and peripherals such as microscopes, perturbation devices, electroporation devices, incubators, bioprinters, and fluid and solid handling devices (think needles and the like), along with connections to traditional silicon chips.
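Purely to illustrate that architecture, here is a sketch of what a programming interface for such a device might look like; every class, device, and method name below is hypothetical, since nothing like this exists yet.

```python
# Hypothetical sketch of a 'bioprocessor' interface; none of these devices exist today.
from dataclasses import dataclass, field

@dataclass
class BioSample:
    description: str                      # e.g. "patient blood draw" or "algae culture"
    dna_programs: list = field(default_factory=list)

@dataclass
class Bioprocessor:
    peripherals: list                     # microscopes, incubators, bioprinters, fluid handlers, ...

    def run(self, sample: BioSample, program_dna: str) -> BioSample:
        """Apply a DNA 'program' to an input sample and return the edited sample."""
        sample.dna_programs.append(program_dna)
        return sample

device = Bioprocessor(peripherals=["microscope", "electroporator", "incubator", "bioprinter"])
edited = device.run(BioSample("donor cells"), program_dna="ATG...")
```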

Fundamentally, having a digital bioprocessor, or some personal-computer equivalent, could lower the cost of creation by several orders of magnitude. The tabletop sets we have today for home biology are the equivalent of HAM radio kits, so it will be some time before we have anything really cool. Still, biology has the same underlying property of being an information science. Like in the pre-personal computer + internet era, we currently go to separate sources to gather all of our biological material: we travel to the grocery store, we go to the mall to buy creams synthesized by snails, we get surgery and pay money to look different, we go to the pet store to get pets; even the clothes on our backs are made from organic materials. If we could download creams, seeds for foods to be grown, and drug treatments, we could enable the same kind of biological creativity that we have in bits.

One use case that bioprocessors could dramatically influence is human drug and medical treatment. The Martin Shkreli and EpiPen snafus could be avoided by at-home production of molecules and treatments. But if the marginal cost of treatments is zero, then how are drug development costs to be amortized? Creating a blockbuster drug today costs billions, so what happens when individuals are able to "download" medicine for free? Of course, this is a moral dilemma. Orphan-disease gene therapy treatments already cost consumers $500,000 for a single treatment, which seems a bit outrageous.

Business Models for Biology

Bioprocessors should hopefully have two first-order effects on biology: decreasing the cost of production and decreasing the cost of distribution. We just have to look to software. As we've seen with the internet, a radical shift in the cost of distribution has reshaped, and will continue to reshape, industries; 10-1000x cost reductions lead to startups disrupting incumbents. With the internet, everything either became free, had a SaaS/API model attached, or birthed a marketplace. Each download or use will cost some amount, like hitting an API endpoint.
  • Music -> piracy (zero cost distribution) + lower production cost = free initially, but now SaaS model, litigious for sure.
  • Movies -> high production costs, lower discovery/distribution cost =  SaaS model (Netflix)
  • Banking -> high production/integration cost = Now have an API for this. We have Stripe.
  • Housing -> high production cost, high discovery cost = Marketplace Model (Airbnb)
The same will happen with biology. The effect on food will be different from the effect on pharma, and that difference comes down to the market dynamics of production, distribution, and reputation. All of these elements add to transaction costs, and as we know, transaction costs govern where fat businesses are made: sit on top of a fat pipe of transaction costs and win money for a long time. One worry people have is drug piracy. If the cost of downloading a drug effectively drops to zero, then what happens to the dollars that need to go into research?

There are a few effects that a bioprocessor and its associated peripheral devices could have on drug development. The cost of research should be way lower, allowing more drugs to come to market; however, determining efficacy will still be hard, so brands or marketplaces should establish themselves.

However, free in biology isn't necessarily bad. People don't always need to be motivated by (direct) monetary ends to contribute: the Debian ecosystem has had ~$20 billion of work put into free software. And this isn't just random stuff; it runs on almost every internet-connected server, and we depend on it for critical infrastructure. We could potentially have freely designed, pest-resistant seeds that farmers could use instead of ones controlled by huge agribusiness conglomerates.

We might have a SaaS-like business model for individuals to purchase treatments (Illumina, Gene Therapy Market??? -> have the right idea). However, we'll have to deal with data security; medical records are worth 20x your credit card information on the black market, and there is no way I would want my health information hacked. A more fun SaaS business might be a custom-designed hair product and colorizer: input a strand of your hair, enter the desired hairstyle and texture, and out comes a specially designed set of creams that actually changes biological hair growth at the follicles. If we can change the follicles, then we can change the color and texture of our hair at will, for longer, cheaper, and more safely than we do now.

If we go to space, we'll certainly need and want different biological tools. Space radiation can kill, just as scurvy once killed sailors. Space radiation might also be curtailed by 4 SNPs that potentially could be free. A digital biocomputer would be a necessary tool: we're not going to have a lot of room on those spaceships, and we're going to need to bring a lot of things. The best way of compressing things is to store them as information.

All of these are possible arrangements for how the bioprocessor could change the production and distribution of organic materials. But we're sadly still a ways away.

Today: Complexity

Computer scientists severely underestimate the complexity of even single cells. These things are really, really complex to model and build, especially if you want atomic-scale precision. And atomic-scale precision is often what you'll need; after all, polymerase is atomically precise. It manipulates individual atoms into place, and we can thank evolution for that: we only get a handful of mutations per few billion base pairs copied. To simulate at that level, we'd need to assume Moore's Law continues 50 years into the future (so we'll basically need quantum computers to continue that trend) just to simulate one cell; for a whole-brain simulation, we'll need 100 years. Another example of complexity is protein structure.

We'll either need to reduce the modeling accuracy of our systems (as we've done with deep learning) or use biological techniques in addition to computational models. We can use bioprocessors as a way of studying cells, directing their evolution, and creating anything we want. On our way to a glorious, biologically infused future, we have many roadblocks to clear in creating the components of a bioprocessor and/or personal biocomputer.

A future post will speculate in detail on 1) what a bioprocessor actually looks like 2) who’s working on this stuff now and 3) what else is holding us back.

Merging Chains

Written by Niraj and Dillon

If you posit that bitcoin has a network effect, then the more people that use the currency to transact, the more valuable the coin becomes. The more valuable the coin becomes, the more users you get, and the stronger the network effect. Additionally, if a longer chain history means better security and more miners mean better security, then in the long run, is there a way to increase the network effect by merging chains?

Right now, we've only got people doing forks. Forks are important: they allow for experimentation on rule sets. However, they may reduce the overall network effect of any single token. Forks are also really good because they align incentives with the people who have already done work on the master chain. In the example of ETH and ETC, we argue that it's a feature that the Ethereum Foundation automatically held both ETH and ETC without anyone's permission; they could gain from the economic value created by another development team. The new development team wins because they get an initial customer set, the set of pub-priv key pairs that already holds ETH. This is a subtle shift in incentives; we'll write more about this later...

While we're not advocating for a maximalist approach, the idea that there should only ever be one token, it seems like there needs to be a process to merge chains just as there is a process for forking them. There is an argument to be made that "Core" teams, or the foundations bearing the base token's name, centralize development resources: in BTC and ETH respectively, only 5 and 2 developers make up the majority of commits. Forking seems to have become a way for talented devs to work on protocols. Just look at LTC and @satoshilite.

Additionally, we see that experimentation has been a net positive for society in other areas. Allowing for experimentation and merging isn't limited to blockchains. Just to name a few:
  • Policy experimentation within a federal system of government, e.g., the adoption of a precursor to the Affordable Care Act before it became national law.
  • Startups as new entrants that can be acquired or grow to be large companies.
  • Spin offs from large corporations. Standard Oil became several smaller companies and Rockefeller was richer for it.
  • Mitochondria being swallowed to become the powerhouse of the cell.

In blockchain terms, you could conceive of merge mining as extended uncle resolution. In the GHOST protocol, an uncle's individual hash power is added to the winning block's score. The uncle's miner is still incentivized: they get some proportion of the block reward. Likewise, people who contributed to the "losing" token are still incentivized; when you merge chains, you're still rewarding the smaller chain for its absorption into the larger chain. While protocols can directly implement the necessary hard/soft forks to include a fork's rule-set changes, they won't get its now-differentiated user base, etc.
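For a concrete sense of how "losing" blocks stay incentivized, Ethereum's uncle rewards (its GHOST-inspired mechanism) work roughly like the sketch below; the helper names are ours.

```python
# Ethereum's uncle-reward schedule, as an example of GHOST-style incentives.
# An uncle included at distance d (1 <= d <= 6) earns its miner (8 - d)/8 of the block
# reward; the including miner earns an extra 1/32 of the block reward per uncle (max two).
def uncle_reward(block_reward: float, distance: int) -> float:
    assert 1 <= distance <= 6, "uncles older than six blocks are not rewarded"
    return block_reward * (8 - distance) / 8

def nephew_bonus(block_reward: float, num_uncles: int) -> float:
    assert 0 <= num_uncles <= 2, "at most two uncles per block"
    return block_reward * num_uncles / 32

# e.g. with a 5 ETH block reward:
print(uncle_reward(5.0, distance=1))    # 4.375 ETH for a 'losing' block one generation back
print(nephew_bonus(5.0, num_uncles=1))  # 0.15625 ETH bonus for including it
```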

How to Do Merges

There are two methods for potentially doing a merge for tokens (and probably more that we haven't thought of).

The first method is pegging token A to token B:
  1. Agree on a price/exchange rate for A:B
    1. Oracle to determine price
    2. Hash power signaling/ratio
    3. Market pricing on exchanges
  2. Hard fork both protocols to have the same block + rule set
    1. Enforce a specific block height for the rule change, include the pegged price ratio
    2. Price converges
  3. Before the rule set is implemented, people are free to trade out of token B
  4. Allow for atomic cross chain swaps
    1. Using Decred or 0x → hard code this into the rule set change
The second method involves one chain "absorbing" the value of the other, meaning that token A remains and token B is never used again (a rough sketch of the settlement math follows the steps below):
  1. Agree on a price/exchange ratio for A:B
    1. Oracle to determine price
    2. Hash power signaling/ratio
    3. Market pricing on exchanges
  2. Acquire buy out funds for A to purchase B
  3. Post a public address where all B tokens can be sent to
  4. Before the rule set is implemented, people are free to trade out of token B
  5. Burn the B tokens; each B token holder will receive the agreed-upon amount of token A in proportion to how much B they sent to the specified address
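A minimal sketch of that settlement step, assuming the A:B ratio from step 1 has already been agreed; the function and addresses below are hypothetical.

```python
# Hypothetical settlement math for the 'absorb' merge: holders send token B to a burn
# address and are credited token A at the agreed ratio, pro rata to what they sent.
def settle_absorption(b_sent: dict, ratio_a_per_b: float) -> dict:
    """b_sent maps holder address -> amount of token B sent to the burn address."""
    return {addr: amount * ratio_a_per_b for addr, amount in b_sent.items()}

# Example: the oracle / hash-power signal / market price settles at 1 B = 0.25 A.
payouts = settle_absorption({"0xAlice": 100.0, "0xBob": 40.0}, ratio_a_per_b=0.25)
print(payouts)  # {'0xAlice': 25.0, '0xBob': 10.0}
```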

Roadblocks to putting this into practice.

Both of these scenarios involve a lot of coordination. Imagining a protocol merge without some kind of explicit voting mechanism beyond hash-power signaling induces a headache right away. The future of decentralized governance will definitely play a large part in how these things happen.

Also, as we see in centralized mergers and acquisitions, the larger company often has to purchase the shares of the smaller company at a price premium. We'll have to establish better pricing mechanisms beyond hash power and the like. Ari Paul and Chris Burniske have been doing a lot of great work on fundamental valuations for this.
Additionally, atomic cross-chain swaps are not the only potential way to transfer a token from one chain to another; a protocol such as Polkadot or Cosmos might allow for this sort of thing as well.

Real World Protocols That Could Benefit.

These wouldn't just have to be currency tokens; you could potentially merge utility tokens as well. For example, look at Sia and Filecoin. If Filecoin were to establish a dominant market cap and market-share position, it might behoove them to purchase the Sia network. One additional step would be needed: before individuals could acquire any of token A, they would need to transfer their files over to the new blockchain. Once this is done, they can claim their Filecoin tokens. Other candidates include:
  • Small cap token mergers
  • Prediction markets (Augur and Gnosis)
  • File storage markets (Filecoin, Sia, and Storj)
  • BTC variant mergers (BTC, LTC, BCC)

----

Look here as well

Cryptobanking: Brain Dump

I was recently having a conversation with Niraj about the upcoming Raiden release, and we got into a bit of a debate about the coming Layer 2 protocols for off-chain transactions. The conversation centered around the risks of centralization within these networks: their graphs may end up distributed, but not really decentralized. We also noticed that there seems to be growing competition for crypto assets. Purchase this token or that token; with the influx of talent into the space, the future seems guaranteed to have many different things competing for your crypto dollars.

Competition for Crypto Dollars
  • Storage -> Filecoin, Sia, Storj
  • Computation -> Golem, Truebit, and other things?? vs something else
  • Bandwidth -> Source, or some other wifi coin.
  • Economic value (staking/tokens) -> Ethereum, you transform it to other tokens, lock it in RanDAO, or stake a Livepeer node.
  • Purchasing new tokens -> NEO, DASH etc.
  • Crypto Hedge Funds -> Prism
  • Layer 2 Protocols -> Lightning Network, Raiden, Plasma, Polkadot.

Since crypto assets are extremely liquid and can be instantaneously changed into some other digital asset, it's tempting to do so. Unless you're a day trader in the top 1%, you'll probably lose money. With all the complexity in dealing with crypto assets, a person's best bet is usually to hold, or rather HODL. In this case, individuals are hoping that the base crypto asset that they purchase appreciates in the future. HODLing is basically stashing your coins under your mattress, which many of my friends have expressed as their dominant investment strategy.

However, if we compare this to a traditional asset like cash, which can earn interest by sitting in a bank, stashing your cryptos under a mattress doesn't seem too enticing. No one's really figured out a way to earn interest while HODLing. Of course, we're not the only ones to have this thought.

Lending Right Now

If your intention is to earn interest on crypto, right now you can lend tokens on Poloniex. When the markets are volatile, you can earn up to 0.1% per day for essentially tapping a button. However, there are two downsides. First, you don't earn that much money because of the fragmentation of order books: fragmented order books drive down liquidity and potentially the demand for your lent crypto assets, which drives down any interest you could earn. Second, lending (and investing) on centralized exchanges carries high counterparty risk. A centralized exchange could be hacked and your funds stolen, the exchange could shut down and leave your assets locked up on the platform, or the exchange could invest your crypto and lose it.
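For context, here's what that headline rate would compound to over a year if (unrealistically) it could be sustained every day:

```python
# Annualizing the 'up to 0.1% per day' headline rate; illustration only, since real
# lending rates vary wildly and are rarely sustained day after day.
daily_rate = 0.001
apy = (1 + daily_rate) ** 365 - 1
print(f"{apy:.1%}")  # ~44.0% per year, before counterparty risk, fees, and idle periods
```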

Instead of holding my tokens on a centralized exchange, I'd be interested in depositing my ETH, or whatever token, in a smart contract that would allow me to earn an interest rate denominated in whatever token I'd deposited. All depositors could pool their funds together, which could then be lent to another party through a loan administered by a separate contract. Of course, this is exactly what a traditional bank does: lots of people deposit their money, and the bank lends it out at interest and splits some of the proceeds with depositors.
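A toy sketch of that pooling-and-splitting accounting, purely illustrative and with hypothetical names; a real version would live in an on-chain contract:

```python
# Toy accounting for a pooled lending contract: depositors share interest pro rata.
class DepositPool:
    def __init__(self):
        self.balances = {}  # depositor address -> token balance

    def deposit(self, addr: str, amount: float) -> None:
        self.balances[addr] = self.balances.get(addr, 0.0) + amount

    def accrue_interest(self, interest_paid: float) -> None:
        """Split interest received from borrowers across depositors by stake."""
        total = sum(self.balances.values())
        for addr in self.balances:
            self.balances[addr] += interest_paid * self.balances[addr] / total

pool = DepositPool()
pool.deposit("alice", 10.0)   # 10 ETH
pool.deposit("bob", 30.0)     # 30 ETH
pool.accrue_interest(2.0)     # borrowers repay 2 ETH of interest
print(pool.balances)          # {'alice': 10.5, 'bob': 31.5}
```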

In crypto, there's already a movement to do this. Mining is effectively an operation that lets you convert BTC into more BTC, and miners often finance the purchase of their equipment by other means. By pooling your hardware together in a mining pool, you can turn mining from a lottery into something closer to a fixed-interest annuity on an emerging commodity. Decentralized mining pools potentially allow a decentralized, smart-contract implementation of the same idea. We already see mining pools with algorithms that optimize which alts to mine at any given time, though this carries the additional risk and annoyance of exchanging the base token you'd prefer to HODL into another, potentially more volatile currency to generate a return. And as protocols continue to proliferate, managing all of this will get unwieldy.

Even more related to our cryptobank concept is a decentralized mining pool for staked tokens. You don't have to convert your crypto into dollars to purchase hardware to earn crypto; instead, you give up the time value of your crypto assets for more crypto in the future, which is fine if we're HODLing anyway. Rocket Pool and 1Protocol are both staked mining pools, and both implement a token on top. In Rocket Pool, every ETH deposited generates a token redeemable one-to-one for ETH, which harkens back to the private currencies and bearer bonds of years past. However, cryptobanks could be used for more than staking protocols and letting individuals margin trade.

Furthermore, lending for token purchases is only one slice of the activities someone might undertake with a borrowed crypto asset. Cryptobanks could drive liquidity by lending into all of these different protocols:
  • Staked mining on Ethereum
    • Things that let you earn money -> RanDAO, Swarm
  • Staked level 2 (Raiden or other things)
  • Delegated staking on protocols that share this design
    • Livepeer.
  • Exchanges to provide liquidity in general, and for margin trading.
  • Operating a 0x node
  • Augur/prediction markets
  • Oracles
  • Numeraire -> (if I really believe my model is superior), then I should go out and purchase token
    • But what if I don’t have assets??? 
  • NEO -> interest for holding token
  • Node operators -> Lightning Network/Truebit/Coinjoin-as-a-service
  • Polkadot/other scalability tokens
History Applied to Crypto

Fractional reserve banking serves two purposes. One role is operational: banks make transferring money easy. Instead of moving my gold bar to Timmy, I can ask the bank to change a ledger entry. The other role is economic: providing investments to individuals. If we look at the history of banking, we went from warehousing money and not lending it at all to lending it out for a fee, with depositors holding only paper slips that allowed them to transfer money to one another.

The operational role of banks is greatly lessened in a crypto economy, as individuals can directly transfer tokens to other individuals using the built-in asymmetric-key cryptography. However, individuals may still want to earn interest as they hold, and this is where a cryptobank might come in.

In the far, far future, when on-chain transactions are too expensive (hopefully this never happens), cryptobank contracts/DAOs may enable the scaling of transactions. They could operate as layer 2 hubs, with connections to other cryptobanks and, finally, to individuals through payment channels. Although we should continue to work on scaling the base protocols, banks could potentially keep the operational role of transferring money (the traditional role of banks). We'll leave the discussion of whether or not this is a good thing, given the centralization it implies, for a separate post.

A common bitcoin debate is whether it needs to be used as a medium of exchange as well as a store of value. The one-sentence argument for bitcoin needing to be a medium of exchange is that if everyone holds bitcoin and never spends it, there will be no transactions, miners won't be incentivized to secure the network, and it'll fall apart. With cryptobanking, however, bitcoin HODLers who are in it for the long haul can lend their bitcoin to others, driving transactions, paying fees to miners, and keeping the bitcoin-as-a-medium-of-exchange dream alive.

And that’s about it! Thanks for reading.

----

Related Protocols

Some Work to be Done

Here are a few things that we don't know. If you are working on any of them, I'd be curious to learn more.

  • What consciousness is and feels like to other entities that are not ourselves
  • How the components of our brains work together to remember, generalize, learn, and act
  • How to safely augment our mental capabilities
  • How to create machines that can remember, generalize, learn, and act
    • How we can do this safely
  • How to create robots that can generalize and can act in a home
  • How to best help those suffering from mental illnesses
  • How to safely create powerful electronics that can remain inside the body indefinitely
  • How to reverse dementia, CTE, Alzheimer's, and more
  • How to reverse cancer
  • How to reverse heart disease
  • Why we keep getting fat and how to stop it
  • The best way to reverse diabetes (and other metabolic disorders)
  • The root causes of aging
    • How to measure biological age
  • How to model the large-scale systems of biology
  • How to cheaply mass produce DNA
  • We don’t know how to rearrange biological components to do useful things in a safe manner
    • We don't know what individual biological components do at various times
  • A cheap, safe, precise way of delivering drugs or other biologics
  • A safer, cheaper way of developing treatments
  • The best way of keeping good monopolies (those that create lots of consumer surplus early in their lifetime) from turning into rent-seeking organizations later in their lifetime
  • How to accurately model large-scale social systems 
  • We don't know the best way to organize and make decisions at the scale of 7+ billion people
  • The best way of solving the tragedy of the commons
    • Climate change
  • How to organize large-scale groups of people, capital, and knowledge in systems to create value
  • How to correctly allocate resources to those doing research
  • The best method of learning, which remains an inefficient, difficult, and sometimes unenjoyable process even for the smart and self-motivated
  • The best way to motivate people to stay happy, work on interesting problems, and to contribute to society
  • How to scale-up self-sustaining fusion reactions
  • How to scale up generalized quantum computation
  • What lies beyond our universe
  • What the fundamental nature of our reality is, at the lowest levels
  • If the universe, at the lowest levels is continuous or discrete
  • How to get a lot of people off the planet onto another planet safely
  • How to get minerals and other resources from off the planet onto the planet

The Great Firewall of Facebook

Media, when concentrated in a few individuals or the state, has always been subject to censorship and influence, whether by direct action or inaction. Western Union, China, and Napoleon are a few prominent examples. Now we have Facebook. The press lauded Facebook and Twitter when they influenced the Arab Spring, but it is chafing at social media's power now that it has come to influence our politics in the States.

Facebook, and any social network, exercises de facto censorship through its tweaking of newsfeed algorithms; Facebook is just the largest and easiest target. Of course, unlike China, Facebook's aim is not to achieve any particular political goal. Its aim as a public company is to create long-term shareholder and user value, so any tailoring of the newsfeed algorithm will be made toward those ends. It is a bit scary that Zuck controls the majority of Facebook's voting stock, the board, and, further downstream, the algorithms that shape our news feeds, all at the sheer scale of Facebook's user base. When shareholder value comes into conflict with user value, the most valuable walled garden in the world has a frightening amount of influence, operating at a scale where an entire country is just a medium-sized audience. Facebook has always been a walled garden, killing off any products (Facebook Platform, access to media, etc.) that captured too much value from the all-important newsfeed. Much like Facebook, the internet in China is also a walled garden; with the Chinese internet, we at least know that the aim of the censorship is to achieve political goals.

As social media has started to influence our domestic politics, there have been proposals from the tech community, from within Facebook, and from the broader public to tweak the Facebook algorithm in various ways. Here are some proposed solutions that I don't think are permanent fixes:
  1. Users proactively changing their feeds. But people don't shift from their default option.
  2. Tweaking the algorithm. As we've seen with SEO and Google, this is just an arms race.
  3. Traditional anti-trust regulation.
    • The iron rule of information economies: everything tends towards monopoly because of network effects and the zero marginal cost of distribution.
    • Smaller groups of people might become even more of an echo chamber.
  4. Turning Facebook (and Twitter) into a public utility or non-profit.
    • This returns to the issue of who controls it. If it's the government, it would always be at risk of turning into a propaganda machine.
    • If it's another rich billionaire, it runs into the same issues as traditional media organizations (as well as Facebook).
The core problem with Facebook, Twitter, and LinkedIn is that they need to aggregate their data and users. That's why they've closed off their APIs to developers: they know their graph is what makes them special. Twitter used to be open with its data but closed it off after seeing third-party clients like Tweetbot and TweetDeck as a potential threat to its data moat. These companies can't relieve us of the tyranny of the algorithm.

Yes, IMAP and open access to email are one example where multiple parties have access to the data and companies can still make money.

However, blockchains and app coins might provide a better solution: ending censorship-by-algorithm while still incentivizing people to create open products. Just as with email, we could use multiple clients to view the same underlying data, while also rewarding the creators of these protocols.



                       | Facebook                          | Email                             | Blockchain Social Apps
Front End              | One (money made up here with ads) | One (money made up here with ads) | Many
Algos                  | One                               | Many                              | Many
Token/access to data   | None                              | None                              | Money made down here with increasing data/users
Data/Blockchain        | Have to guard this                | Free to share                     | Share freely/forkable

We can tie a token directly to read/write access to the data. The token should rise in value just as Bitcoin has risen in value as more transactions and more data are added to the ledger. When we tie the business model of the token directly to the data, we don't have Facebook's problem of being unable to share its data.

Just as we can view our email with multiple clients, we'd be able to view our friendship graph and the stories and links our friends post with multiple front ends. On the front end, it doesn't matter which UI/UX the user sees, or which ranking algorithm they use*. Different entities can A/B test algorithms for sorting newsfeeds: ones that let users see fake news, ones that expose users to more long-form content, or even ones that promote argument. Cryptocurrency-based social networks could end the de facto censorship that Facebook holds over what news a user sees. Some people have already built prototypes of these social networks: Squeek.io and Eth-Tweet. To me, this seems like a potential solution that aligns everyone's incentives; it's a way for technology to solve problems created by technology.
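As a toy illustration of the "one shared data layer, many ranking algorithms" idea; every name and field below is hypothetical.

```python
# Hypothetical sketch: interchangeable ranking algorithms over one shared, open data layer.
from typing import Callable, List, Dict

Post = Dict  # e.g. {"author": ..., "text": ..., "timestamp": ..., "long_form": bool}

def chronological(posts: List[Post]) -> List[Post]:
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

def long_form_first(posts: List[Post]) -> List[Post]:
    return sorted(posts, key=lambda p: (not p["long_form"], -p["timestamp"]))

def render_feed(posts: List[Post], ranker: Callable[[List[Post]], List[Post]]) -> List[str]:
    # Any client can plug in any ranker; the underlying (on-chain) data never changes.
    return [p["text"] for p in ranker(posts)]
```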

Facebook represents the centralized model of social networks, and it will remain very important. But while it's a great utility, it also runs counter to the spirit of the open web. Perhaps blockchain social networks can return us to the open-source past of the web while still allowing creators to satisfy their self-interest.

* Gating access to the underlying data doesn't have to mean that the average user pays to use the service; different token distribution mechanisms can be used so that top users (likely advertisers or celebrities) subsidize access for the average user.

Learning from Advice

College is a pretty stressful and uncertain time. And if you've heard anything about how cut-throat an environment Penn is, then you know how much people worry about their futures. Am I freaking out a bit? Are my friends freaking out a bit? Yea. But this essay isn't about commiserating over that experience directly. It's about what we do when we face these uncertain times. Usually, we look for answers on Google, in churches, from older peers, from parents, and even in fortune cookies. We look to anyone and anything that might have pertinent advice. Yet for all the so-called advice we get, why doesn't much of it seem to stick?

When I ask for advice, I'm usually in a moment of semi-crisis or with someone who I think knows more than me. Hearing a piece of seemingly insightful advice is excellent instant gratification. Regardless of the quality of the advice, anything that sounds remotely confirmatory of my planned direction gives my brain a small dose of dopamine. If I have a notebook handy, I might even write it down. But after a while I forget that piece of advice and move on with my life just as before. Until, of course, I invariably do the same thing all over again: ask, feel good, forget. When I ask for advice, I actually do want to improve my life trajectory, so that's not a very helpful cycle.

Advice is something that's been gained through years of experience, and that's how it should be applied. Advice is a little kernel that we are supposed to carry and ingrain in our minds, a habit or behavioral change that we should make. Yet in the moment of asking for advice, our minds do something called substitution. Because the question of how we will feel in the mid-to-long-term future is so cognitively hard (impossible?) to forecast, we substitute that hard question with the easy question of how we feel the moment after we receive a piece of advice.

Knowing that we have to break this cycle of merely collecting advice is a good start, but it's also pretty general. In the end, taking advice and acting upon it is about building new habits, and since there are already mountains of literature out there about building habits, I won't go into it. Deciding to take advice and change a habit is also a step past where advice can trip us up: bad advice implemented well can lead us into an American Beauty-style midlife crisis.

In general, advice from books is probably better than advice from a person. Books more than 50 years old and ones with ancient wisdom are extremely helpful; they give time-tested advice with the intuition and experience behind it. When moving on to seeking wisdom from people, make sure to gauge how well a person knows their domain before taking their advice. When someone is older or seems to have more esteem, it's hard not to get sucked in by their halo, and if the advice is delivered confidently, it's harder to discount even though it's often guesswork on their end. Even if they do somewhat know what they're doing, be wary of how they came by their advice. The environment in which advice is sought and experience earned matters. Wicked environments are those where individuals can learn the wrong lesson from participating. A broad example of this is the 2001 dot-com crash: a whole swath of people learned from that time that tech is a bad investment, something we can now see is not true. This applies to investing or any highly random, low-feedback environment (finding a soulmate, landing a dream job??).

Take advice from people whose shoes you want to be in. The future is indeterminate: in ten years, I could see myself as a startup entrepreneur, a VC, or even doing something in public policy, so the cross-section of people I'd seek career advice from is large. After asking and compiling advice from multiple sources, I try to discern the experience behind the advice, look for ways in which the advice breaks, make sure the incentives of the people dispensing it align with mine, and not ask for more advice before changing my own behavior. In the case that their advice conflicts, as it often will, I just go with my gut. I do this because it probably means either I'm asking the wrong question, or the decision the advice bears on is inconsequential, or both. Not overanalyzing the situation can be tough when deciding whether or not to drop out of school. And in the case that people's own actions conflict with their own advice, it matters even less what course of action we take. Advice is just a data point, as every situation is different. Being able to live with your decision is what's most important in the end (Thanks Demps).

In Defense of Young Founders

Sometime during the summer, a friend of mine questioned whether young founders (let's say younger than 26) would be able to build the biggest startups of the future. The argument was that the startups of the future will trend towards hard tech. Technologies like biotech, robotics, AI, and materials science each take years to build domain expertise in, not to mention that they are capital intensive. Both of those are barriers for young founders getting started. Contrast this with the recent history of companies centered on information technology and the internet: we all have the image of the genius hacker developing applications as a teenager. This was (and still is) an open industry, where the tools of development are literally on everyone's desktop. With all that said, it sounds like we have to say goodbye to the garage startup. So are there any reasons to be optimistic about the young founder of the future?

Over the past few decades, there have been many examples of student founders; Michael Dell, Bill Gates, Woz, and Steve Jobs all come to mind. It's hard to think of examples outside this familiar range, but we shouldn't fall prey to availability bias.

A quick survey of Wikipedia shows that in each technological era, young founders have been able to make a name for themselves. (The list is highly biased towards US companies and is not comprehensive by any means.) Of course, past trends are no guarantee that the young-founder trend will continue--just ask Nassim Taleb. Startups are a uniquely creative pursuit; they sit between mathematics, a totally abstract pursuit, and history. In "Age and Outstanding Achievement," Simonton examines the age of peak creative/leadership output across different fields: poetry, pure mathematics, and theoretical physics exhibit a peak age in one's late 20s or early 30s, while novel writing, history, philosophy, medicine, and general scholarship exhibit a peak age in one's late 40s or early 50s. I think entrepreneurship skews towards the younger side, but why? Naval Ravikant and Marc Andreessen have already written two great blog posts about this, and I'll quote liberally from them here.
"The first set comprises problems that are solved by an emotional state (poetry, painting), by loading a very difficult single framework into your head (math, physics, coding), and / or competition (driven by sex drive and time-sensitive). The latter set are more rational, are systems problems rather than point problems, and don’t have time-sensitive competition. " - Naval
Compare that to internet startups:
"Modern entrepreneurship, especially web entrepreneurship, is extremely competitive / time sensitive, requires enormous amounts of iteration even withina single product life-cycle, and often requires solving many challenging technicaland business problems one after the other in a public view (with the opposite sex watching). So, it favors the young and single." - Naval
While Naval says the young-founder phenomenon may be limited to the modern age, I'm generalizing from the list built above: entrepreneurship has historically maintained this youthful skew and will for the foreseeable future. Another biological factor that may cause the youthful skew is the difference in the peaks of fluid and crystallized intelligence. The young founder's combination of enthusiasm and peak fluid intelligence helps her identify new markets, iterate on products, and more. Yet founders alone are not sufficient to create huge startups. Networks of other talented people, financing, production infrastructure, and the right knowledge also need to be in the mix.

Although hard tech startups will always require fundamental knowledge to get started and iterate, knowledge is now easier to acquire than ever. YouTube videos, pirated textbooks, Reddit, and StackOverflow are just a few of the aggregated knowledge bases. Knowing things within a domain is now easy enough, and young entrepreneurs of today also have the advantage of seeing the non-obvious connections between different fields. arXiv and scihub.org have allowed academic papers to be shared as soon as they are written. It's amazing to watch implementations of a DeepMind paper being worked on by communities around the globe simultaneously: usually within one week you can expect to see code from the paper, and within another week that code is doing something as interesting as writing episodes of Friends or analyzing the genome.

Sadly, not all fields enjoy the low startup costs of software and AI startups. The hard tech startup often needs lab space or large capital commitments to start building prototypes. Not to mention that the speed of iteration in AI is probably some factor of 10x faster than in biological experimentation or materials science, because you don't have to wait for cells to reproduce (or die). Again, though, new innovations are on the young founder's side. Infrastructure is becoming almost as easy to deploy in hard tech as AWS is for a developer, making iteration roughly 10x faster and 10x cheaper:
  • CRISPR -> 10x easier to gene edit anything. "With CRISPR, literally overnight what had been the biggest frustration of my career turned into an undergraduate side project," says Reed, of Cornell University. "It was incredible."
  • Desktop gene sequencing -> 10x cheaper and faster to analyze your genome
  • Cloud experimentation platforms -> 10x faster/cheaper way to run and scale. I compiled some other bio related advancements here.
  • AI applied to VR Content Dev -> 10x faster generation of scenery and characters
  • Open Source CS -> 10x more stable and useful software... for free
  • Physics/material science/chemistry/protein folding -> 10x faster experiments with computer simulation (just wait for quantum computers)
  • Bitcoin/cryptocurrency -> 10x better way to incentivize open protocol adoption. 

After a founder uses those basic infrastructure tools to find an idea that looks like it could be impactful, they can leverage new funding mechanisms to scale more quickly. The funding of innovative ideas has long been concentrated in the hands of a few. Governments once reigned supreme in funding things; as we became wealthier, this trickled down to wealthy individuals, then to professional risk investors, and now to individuals in the form of crowd sales, Kickstarters, and most recently app-coin sales. If you accept the idea that no one can judge innovation at the earliest stages--that VCs and angels are using basic heuristics to cull bad startups rather than picking winners--then the new funding mechanisms can do that job just as well. The free flow of capital through crowdfunding and more diversified risk at the seed stage allow more companies to get created.

The internet and its associated products should help entrepreneurship in general. If history is any guide, these types of innovation should help those out at the edge the most: today's young founders and others who are resource-poor. More young founders will be able to start the hard tech companies of the future as iteration gets faster, the cost of starting falls, and intellectual capital becomes easier to access. The more abstract our tools get, the more quickly we can go from insight in mind to project in hand. I, for one, am excited about this future.


TLDR

Young founders will win because
  1. the nature of innovation has always skewed young,
  2. the composition of entrepreneurship will stay geared more towards fluid than crystallized intelligence,
  3. the inputs of entrepreneurship, i.e. knowledge, are getting easier for young entrepreneurs to access, and
  4. the tools of development and capital are easier for anyone to acquire

Free Internet and Electricity (And Crypto) Everywhere

Before finals last year, I traveled to Belize to escape school. I felt the full force of the 100% humidity and the sun beating down on our backs at a scalding 97 degrees. Trouble began to brew as our car rental fell through; it wasn't turning out to be the relaxing getaway we thought it'd be. Luckily, we got a car from Pauncho's, a local car rental service, at double the normal insurance premium. We soon pulled away from the airport and set our sights on a long drive.

Belize is undeniably beautiful. Glancing up from the road, I caught glimpses of lush greenery and huge mountains in the distance. Later in the trip, we spent time in a rainforest tree house, surrounded by coos and croaks on all sides. However, this beauty was juxtaposed with the conditions of the towns we visited. I saw weather-worn houses and one-room schools deprived of access to the internet. On the trip, we paid a huge premium for that access: $70 for a hotspot and 2 GB of data, a luxury that many of the people around me couldn't afford. While meditating on that, I caught up with the connected world.

I read about how solar energy was spreading around the developing world due to low-cost Chinese panels and about the new release of the 21 Bitcoin Computer. The "21" press release had a quote that stuck with me--"a miner in every chip and device". Sometime while reading this article, a flash of inspiration hit. I envisioned an integrated system to give access to the internet and electricity for free--a solar panel, embedded cryptocurrency miner, battery, and Wifi/3G access point. We would give the device and internet services away for free and earn money by mining cryptocurrency with free solar-generated electricity.

While we have 5 billion phones on the planet, people in developing nations pay the highest costs per capita not only for smartphone usage but also for merely powering those phones. We know that the smartphone is everyone's gateway to the internet. However, the internet that you and I use at home is not what those in the developing world use. Phones often can't update their firmware because the cost of that download alone would eat up an entire month of data. Data plans can cost as much as 37% of a worker's monthly salary in the developing world, and in rural areas this is even more stark; these areas often don't have access to cellular service at all. I know this not only from months living in my ancestral farm town in China but also from this recent experience in Belize.

I recently ran a back-of-the-envelope model to test the feasibility of this design. Thanks to increasing solar panel efficiency, decreasing hardware costs, cheap computing power, new 4G/LTE/Wifi satellites, and Bitcoin, the numbers seem to work. We could potentially give everyone in the world access to today's essential utilities: free internet, electricity, and access to a global financial system. Who knows if this idea will end up working, but the potential seems pretty great :) If anyone has any info to invalidate this idea, please share it; in the meantime, I'll be learning more about crypto price dynamics, satellite internet, and the reliability of hotspots, and then moving on to building a prototype!
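For what it's worth, the structure of that back-of-the-envelope model looks roughly like the sketch below. Every number here is a placeholder assumption chosen only to show the shape of the calculation, not real data; the real exercise is plugging in current panel, miner, and network figures.

```python
# Illustrative structure of the feasibility model; all inputs are placeholder assumptions.
panel_watts = 100                  # assumed solar panel size
sun_hours_per_day = 5              # assumed usable sun hours per day
miner_joules_per_gh = 0.1          # assumed ASIC efficiency (joules per gigahash)
btc_price_usd = 4000               # assumed BTC price
network_hashrate_ghs = 8e9         # assumed total network hash rate, in GH/s
block_reward_btc = 12.5
blocks_per_day = 144

# Energy harvested per day determines the average hash rate the device can sustain.
joules_per_day = panel_watts * sun_hours_per_day * 3600
avg_ghs = (joules_per_day / miner_joules_per_gh) / 86400

# Expected share of the network's daily block rewards, converted to dollars.
daily_revenue_usd = (avg_ghs / network_hashrate_ghs) * blocks_per_day * block_reward_btc * btc_price_usd
print(f"~{avg_ghs:.0f} GH/s sustained, ~${daily_revenue_usd:.2f}/day in mined BTC")
```

Whether that daily figure covers the amortized cost of the panel, miner, battery, and connectivity is exactly the question the model has to answer.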