
9/11 And The Web

September 12, 2011


How we observed this day

Danny Lewin entered MIT’s Ph.D. program, then co-founded Akamai Technologies with his advisor Tom Leighton, all before boarding American Airlines Flight 11 in Boston on September 11, 2001.

Yesterday, Sunday, was ten years since the 9/11 attack on America. The attack left almost three thousand dead and affected thousands more indirectly—some who lost family; some who lost neighbors; some who barely survived; some who tried to help, but could not. Ken and I cannot possibly speak for those who lost loved ones—the closest we come is that Ken’s father was in the North Tower at ground level and escaped unhurt. Countless others watched from afar, but were touched and forever changed by this attack. It is impossible for us to forget where we were that day and what we saw. Impossible.

We thought it right to remember one of the victims of the attack. Every victim has a story, and not one of them deserved what happened that day. But one victim, Lewin, was a member of our community, and it seems correct to remember him. Perhaps talking about his brilliant work will not stop the pain that many still feel—and will feel forever; one person told me (Ken) that he cannot bear to read the part of Lewin’s Wikipedia page that describes his death. Perhaps nothing can help, yet talking about his work seems to us one way to remember and mourn the loss of a brilliant researcher, who otherwise would have had the last ten years to add to our understanding of the world.

Lewin had seat 9B, forward in business class, among the hijackers. It is believed that he confronted them single-handedly; it is known that he was killed even before the plane turned to follow the Hudson River south toward New York City. This is known because a flight attendant got through on the American Airlines emergency line and reported what had happened, ascribing the killing to a hijacker seated just behind Lewin. Communications between passengers on the planes and their loved ones enabled those on United Airlines Flight 93, which was over Pennsylvania and headed toward Washington, DC, to learn the hijackers’ homicidal intent, and to take organized action against them.

Lewin’s Research on Polynomials

The work by Danny Lewin that first came to both our attention was his STOC 1998 paper with Salil Vadhan. It broadened results on polynomial identity checking by Zhi-Zhong Chen and Ming-Yang Kao which had appeared at the previous FOCS.

Chen and Kao considered the classic problem: given a polynomial {p(x_1,\dots,x_n)} over the rational numbers, how easy is it to tell whether {p} is identically zero? The sense in which the polynomial is given is key. If it were given as a sum of terms, where `term’ means a monomial multiplied by a coefficient, you could just add the terms to see if everything cancels. Things become trickier if it is given by a formula that includes products of polynomials, because multiplying out the products might create exponentially many terms. The polynomial might be given with parts stated to be the determinant or permanent of some matrix of terms. It might even be given as a “black box” allowing you to present arguments {a_1,\dots,a_n} and receive the value {p(a_1,\dots,a_n)}, but not telling you anything about its structure, except possibly a bound on its total degree {d}.

No deterministic polynomial-time algorithm is known for this problem. However, except over small finite fields, the zero set {V_p} of a non-zero polynomial {p} will be small in various senses that a poly-time randomized algorithm can exploit. We have written on this before. The simplest idea is to choose points {a} at random, so that with high probability a witnessing point {a \notin V_p} will be found for nonzero {p}. To gain greater confidence after a sequence of answers {p(a) = 0} that {p} is really zero, one can try more points—at the cost of more random bits.
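To make the random-evaluation idea concrete, here is a minimal Python sketch; the toy polynomial and the size of the sample space are our own illustrative choices, not anything from the papers discussed here. Note that each extra trial spends fresh random bits, which is exactly the cost the work described next tries to avoid.

import random

def probably_zero(black_box, n_vars, total_degree, trials=20):
    """Report whether the black-box polynomial looks identically zero.

    By the Schwartz-Zippel bound, a nonzero polynomial of total degree d
    vanishes at a point drawn coordinate-wise from a set of size 100*d
    with probability at most d/(100*d) = 1/100, so each trial
    independently catches a nonzero polynomial with probability >= 99/100.
    """
    side = 100 * total_degree
    for _ in range(trials):
        point = [random.randrange(side) for _ in range(n_vars)]
        if black_box(*point) != 0:
            return False   # found a witness: the polynomial is certainly nonzero
    return True            # no witness found: identically zero with high probability

# Hypothetical example: (x + y)^2 - x^2 - 2xy - y^2 is identically zero.
p = lambda x, y: (x + y) ** 2 - x ** 2 - 2 * x * y - y ** 2
print(probably_zero(p, n_vars=2, total_degree=2))   # True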

Chen and Kao’s idea was to define irrational points {r} that are guaranteed by Galois theory to lie outside of {V_p} regarded as a subset of the reals. Because {V_p} is topologically closed, all sufficiently close rational approximations to {r} lie outside of {V_p}, but how close do we have to go? They showed that with high probability over random bits used once and once only to choose {r}, a moderate number {\ell} of leading bits of coordinates of {r} suffice to find a witness {a}. To increase the confidence that {p(a) = 0} really means {p} is identically zero, one need only increase {\ell}. The key is that this does not require increasing the number of random bits. The hitch, however, is that their algorithm is wedded to how the rationals sit inside the reals, and their analysis works only when {p} has integer coefficients.

What Lewin and Vadhan recognized is that beyond the tricks with sums of square roots of prime numbers, and notions of approximation which don’t apply over finite fields, were structural ideas that hold for any field {F}. Their paper explains this beautifully already, and we cannot do better than to quote them as a perfect example of game-changing:

Primes {\longrightarrow} Irreducible Polynomials in {F[x]}
Square Roots {\longrightarrow} Infinite Power Series over {F}
Approximation {\longrightarrow} Square Roots mod {x^\ell}

Well, we have led off with this because we are interested in polynomials. We actually have a similar situation with our own paper together with Atri Rudra, which I (Ken) just presented at MFCS 2011 and today at the 4th Eastern Great Lakes Theory Workshop organized by Atri and Alan Selman and Bobby Kleinberg of Cornell. We have one construction in that paper that is algebraically natural but doesn’t give polynomial time, and another that gives nearly linear time but uses coding tricks that work only over finite fields. Can we symmetrize an arbitrary polynomial without losing information in a way that is both efficient and algebraically natural over any field? Perhaps you will see something we’re missing.

The Paper That Begat a Company

One might think that fussing with probabilities and economizing on random bits is not the road to riches, but one would be wrong. The famous STOC 1997 paper by Leighton, Lewin, and four other MIT authors that gave rise to Akamai was all about randomized algorithms and their analysis. Usually we would link just the word `paper’ to reduce link-colored words, but here we give in full the long URL used by Akamai to store the paper, because paying reverence to the paper’s subtitle is the point:

http://www.akamai.com/dl/technical_publications/ConsistenHashingandRandomTreesDistributedCachingprotocolsforrelievingHotSpotsontheworldwideweb.pdf

Today we can’t help thinking of “relieving hot spots” as a phrase relating to the war on terror, but anything that makes the Web more redundant and robust will help there too.

The nub is to make the hash mapping of keys to slots robust under adding or removing a few hash buckets. A standard hash function like {h(u) = Au \pmod{N}} will fail because incrementing or decrementing {N} changes the values modulo {N} in unpredictable ways.
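A quick toy check of this instability (the multiplier and table sizes below are arbitrary choices of ours, not anything from the paper):

# With h(u) = A*u mod N, growing the table from N to N+1 buckets remaps
# almost every key, which is the instability described above.
A, N = 2654435761, 100          # arbitrary multiplier and bucket count
keys = range(10_000)
moved = sum(1 for u in keys if (A * u) % N != (A * u) % (N + 1))
print(f"{moved / len(keys):.0%} of keys change bucket")   # typically about 99%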

Their solution was to create a relationship of geometric closeness between the values that hash to a given slot and an anchor for each bucket, while giving each bucket multiple anchors spread out in random locations over the domain of possible keys. When a new bucket is added, it affects only those keys near the locations chosen for its anchors. Deleting a bucket forces re-mapping of only those keys that were close to its anchors. Each affected key will find the next-closest anchor, and significantly, the new anchors in different locations will typically not belong to the same bucket, but mostly to different buckets, thus sharing the increased load evenly.
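Here is a minimal sketch of the anchor idea in Python. The underlying hash (MD5), the anchor count, and the bucket names are our own simplifying assumptions, not details of Akamai’s scheme; the point is only that dropping one of ten buckets remaps roughly a tenth of the keys and spreads them over the survivors.

import bisect
import hashlib

def _h(s: str) -> int:
    """Hash a string to a point on the ring of possible positions."""
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class ConsistentHash:
    def __init__(self, buckets, anchors_per_bucket=100):
        # Each bucket gets many anchors scattered pseudo-randomly around the ring.
        self.ring = sorted((_h(f"{b}#{i}"), b)
                           for b in buckets
                           for i in range(anchors_per_bucket))

    def lookup(self, key):
        """Assign key to the bucket owning the next anchor clockwise from it."""
        pos = bisect.bisect(self.ring, (_h(str(key)), ""))
        return self.ring[pos % len(self.ring)][1]

keys = range(10_000)
before = ConsistentHash([f"bucket{i}" for i in range(10)])
after = ConsistentHash([f"bucket{i}" for i in range(9)])    # bucket9 removed
moved = sum(1 for k in keys if before.lookup(k) != after.lookup(k))
print(f"{moved / len(keys):.0%} of keys remapped")           # around 10%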

Ironically, the load-sharing feature of Akamai’s network had its first extreme test on 9/11/01 itself. To quote this article:

By 10 a.m. that day, as employees struggled to come to grips with Lewin’s death, Akamai’s servers were being hammered by millions of visitors looking for information about the attacks. Inside Akamai’s network operations command center, engineers’ eyes widened as they watched the situation unfold. “Our employees were in shock,” recalls [then-president Paul] Sagan, who was at the Cambridge offices that morning. “But traffic was going crazy.” Remarkably, Akamai’s network and employees absorbed the stress, so that customers like CNN, which used Akamai to cache and deliver Web content, were able to accommodate the extra traffic without a hitch.

Now the company routinely handles loads that are an order of magnitude higher, and on a typical day about 20% of Internet traffic is delivered by its servers.

A nice explanation of consistent hashing is here. We note that the STOC paper promised a full version proving that only limited independence is required, but Leighton and Lewin were soon more interested in field-testing their algorithms. They and their group drew up a business plan, and entered the MIT 100K Entrepreneurship Competition, which was then the 50K. They were one of six finalists out of 100 entries, but we are told they lost out to an online dog-food marketing scheme that flopped. Oh well.

Akamai and Communication

Now Akamai is a thriving company, providing fast, reliable, secure communication on the World Wide Web. This success stems from smart implementations of novel network routing algorithms, and from good business partnership sense. Much of this owes to current CEO Sagan, as told in the above-quoted article. Bobby Kleinberg worked at Akamai during Lewin’s time as Chief Technology Officer and shares with us some of his memories of Lewin. The company’s success depends on having servers running their algorithms, of course, but these servers must be physically located all over the world. Latency is a big factor in the scheme laid out in the 1997 STOC paper, which needed nothing short of a world vision to realize.

Placing servers physically around the world requires negotiating with many countries, each with their own concerns, languages, and cultures. Even more complex, each country may have distinct technical concerns. For instance, some countries differentiate less between the cost of uploads and downloads because most of their cables go through oceans, and this affects the terms under which Akamai can attach its servers to their systems. This requires communication skills of the first order. Lewin’s nature was to be not only on top of everything analytically but also engaged with everyone personally, and this, plus the acumen of those who joined him and Leighton in leading the company, made it happen.

Communication within the company was also vital. As an icebreaker, Lewin, who used to organize softball games for the MIT theory group, once arranged a paintball match pitting the four top company officers against all other employees. The other three executives quickly took cover behind Lewin as the hundred on the other side advanced. Drawing on his training and four years of service in an elite Israel Defense Forces unit, and on his general sang-froid, Lewin calmly took deterministic, accurate aim while being randomly pelted, and picked off quite a few of the assailants by the rules of the game.

We would wish for the world that no one would face a situation any more highly charged than that, as Lewin himself did on 9/11. Several times on Sunday, in person and on the radio, we heard references from ancient books to a future world where people would war no more. That world may seem no closer now than then. However, the presence of entities that, like the Web, are truly world-wide rather than country-bound, and that we need to defend, may provide one vital step.

Memorials

Memorials for Danny Lewin include a page maintained by MIT attesting to his ardent humor, and a page at Akamai. The ACM STOC Best Student Paper Award is named for him, as is a square within hailing distance of MIT’s Stata Center. Among online memorials we note CNN’s tribute page for him.

Open Problems

We thank Kleinberg also for contributing a theoretical problem from Akamai days that is still open after these ten years. It is from another paper by Lewin and Leighton, with Mor Harchol-Balter. It concerns how long it takes for all nodes in a weakly-connected directed graph {G} to learn about each other while keeping total communication close to the minimum.

Initially every node {u} knows and can communicate with only its out-neighbors {v}. It can send one or more of them its “address book” of out-neighbors, which then become out-neighbors of {v} too, and/or request the address book of {v}, which is then merged with its own. Doing both with every neighbor in every round as the graph’s edges grow, however, uses a quadratic amount of communication, which is too much. Instead, each {u} may choose one neighbor {v} in each round, and either

  • “Ask In”: request the address book of {v}.
  • “Name-Drop”: push its address book onto {v}.

The latter may seem anti-social, but they show that when {v} is chosen uniformly at random, it works much better than the former: {O(\log^2 n)} rounds with high probability, versus the {\Omega(n)} rounds that “Ask In” can need. As their paper’s last page says, the resulting “Name-Dropper Algorithm” was implemented by Akamai.
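As a small illustration, here is a push-only simulation sketch in Python; the starting graph (a directed ring) and the parameters are our own illustrative choices, and the sketch ignores the message-size accounting done in the paper.

import random

def name_drop_rounds(n, out_neighbors):
    """Rounds until every node's address book contains all n nodes.

    out_neighbors: dict node -> initial set of out-neighbors.
    In each round every node pushes its whole address book to one
    neighbor chosen uniformly at random; books only ever grow.
    """
    books = {u: {u} | set(out_neighbors[u]) for u in range(n)}
    rounds = 0
    while any(len(b) < n for b in books.values()):
        rounds += 1
        snapshot = {u: set(b) for u, b in books.items()}   # start-of-round state
        for u in range(n):
            targets = sorted(snapshot[u] - {u})
            if targets:
                v = random.choice(targets)
                books[v] |= snapshot[u]                     # "Name-Drop" onto v
    return rounds

n = 512
ring = {u: {(u + 1) % n} for u in range(n)}   # weakly connected starting graph
print(name_drop_rounds(n, ring))               # the paper proves O(log^2 n) rounds w.h.p.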

The open problems are whether in fact every {G} becomes the {n}-clique with high probability in {O(\log n)} rounds, or whether the basic {\Omega(\log n)} lower bound can be raised, or whether some other message-passing strategy can do better. The last page also cites a personal communication from Dick asking how important it is that each {u} know the total size {n} of the graph in advance; note that even when {u} knows all nodes it may still need to inform some {v} about them.

19 Comments
  1. September 12, 2011 2:48 pm

    Thanks for this touching and fitting tribute. While I only met Danny once at an LCS/Akamai “Theory Jeopardy” event (one of the many he organized), it was plain to see that he was immensely talented and creative, and a charismatic community builder. That he died attempting to prevent a horrific tragedy is probably no surprise to those who knew him best.

  2. September 12, 2011 2:52 pm

    The picture of Lewin is the lobby portrait of him at Akamai’s Cambridge, MA headquarters. The image of random numbers was purchased for 1 euro from copyright holder Yay Micro, crediting PixelsAway as source. Doing so removed the “Yay” imprint.

  3. Daniel Bilar permalink
    September 12, 2011 4:31 pm

    What is not known until today is how much of a hero Danny Lewin truly was on 9/11/2001. He died a hero, the first 9/11 victim, trying to stop the terrorists who, like him, were in business class.

    A leaked Federal Aviation Administration memo written on the evening of Sept. 11 contains disturbing revelations about American Airlines Flight 11, the first to hit the World Trade Center. The “Executive Summary,” based on information relayed by a flight attendant to the American Airlines Operation Center, stated “that a passenger located in seat 10B shot and killed a passenger in seat 9B at 9:20 a.m. The passenger killed was Daniel Lewin, shot by passenger Satam Al Suqami.” The FAA has claimed that the document is a “first draft,” declining to release the final draft, as it is “protected information,” noting the inaccuracies in reported times, etc. The final draft omits all mention of gunfire. Lewin, a 31 year-old dual American-Israeli citizen was a graduate of MIT and Israel’s Technion. Lewin had emigrated to Israel with his parents at age 14 and had worked at IBM’s research lab in Haifa, Israel. Lewin was a co-founder and chief technology officer of Akamai Technologies, and lived in Boston with his family. A report in Ha’aretz on Sept. 17 identified Lewin as a former member of the Israel Defense Force Sayeret Matkal, a top-secret counter-terrorist unit, whose Unit 269 specializes in counter-terrorism activities outside of Israel.

    http://www.upi.com/Business_News/Security-Industry/2002/03/06/UPI-hears/UPI-87441015437383/

    Hear the account on that day and the interview with his family here: http://www.ynetnews.com/articles/0,7340,L-4120871,00.html

  4. September 12, 2011 5:20 pm

    Hi,
    Kutten-Peleg-Vishkin proved that there exists a (deterministic) message passing strategy requiring only O(log n) rounds. Their paper seems to be accessible here: http://www.umiacs.umd.edu/users/vishkin/PUBLICATIONS/spaa3-20-01.ps

    • September 12, 2011 9:10 pm

      Thank you very much! Here is CiteSeer’s PDF copy. I think this answers the clause I added about “or whether some other message-passing strategy can do better”. The two statements about the analysis of “Name Dropper” itself, which came from Bobby K., seem to be left open.

      The KPV paper has a page toward the end on prospects for implementing their deterministic algorithm—I wonder if any tests have compared it to the randomized “Name-Dropper”.

  5. Michael G permalink
    September 13, 2011 12:11 am

    While I have never met Danny, everything that I hear about him paints him as a genius who really cared about others. May his soul rest in peace. It is a great loss to lose someone so brilliant, especially so young.

    On an unrelated topic, however, as an aspiring theory student who once built large real-life distributed systems for a living before coming back to grad school, I do have some concerns about how we overhype the importance of theory in the practical world (such as the “algorithm that launched a company”). A system like Akamai or Amazon or Google has literally hundreds of algorithms. And I would be very surprised if consistent hashing made even the top 20 things that differentiate Akamai’s technology from its competition and contributed to its success. Companies such as Akamai or Amazon or Google succeed due to other big ideas: product ideas, new business models, operational innovation, system innovation. It is very seldom that theory is high on that list.

    I realise that a degree of hyperbole helps to convince granting agencies that we have great impact on practice. But our real impact may not be specific algorithms but producing logical thinkers who then apply their rigorous training to solve real-world problems unrelated to theory (the real reason why many theory folks are good entrepreneurs, hedge fund managers, etc.). And there is nothing wrong with owning up to that and feeling proud of it.

    BTW, I stumbled on this recent paper that explains in great detail (perhaps for the first time?) how Akamai works in terms of motivation, algorithms, and architecture. It is on the same publications page referenced above, and the paper was extensively blogged and twittered recently (which is how I stumbled onto it).

    http://www.akamai.com/dl/technical_publications/network_overview_osr.pdf

    I urge theory folks to read papers about actual systems such as the above, whether from Google, Akamai, Amazon, or VMware, and understand the field in depth before choosing theory problems to solve. It would also put into context where theory belongs and where it might be applicable in such systems. I would say that a lack of understanding of real-world systems is hurting theory and is likely to lessen our future impact. Theoreticians spending sabbaticals at these places and coming back with insights into real problems would also really help.

    humbly,

    Michael G
    (just a grad student:-))

    • rjlipton permalink*
      September 13, 2011 3:03 pm

      Michael G,

      No question many algorithms, many ideas are used. But it is true that they started with an idealized problem, wrote a theory paper, entered an MIT contest, lost the contest, and then still worked on making the theory ideas into a company. Of course the company is much more than “just” implementing the algorithm, but they started as theory people. In this sense I think we are fair in citing his background as a theory researcher as playing a key role.

    • September 13, 2011 3:04 pm

      Yes, nicely put and appreciated. My own sense is that the truth is between two PowerPoint hits I get for a Google search on Akamai consistent hashing:

      #2 Aditya Akella lecture 14 of his CS740 Advanced Networks course at UW Madison, 3rd slide from end: “…use consistent hashing — idea behind Akamai!”

      #10 Peter Danzig, keynote at NOSSDAV 2001 meeting, 2nd slide from end: Consistent Hashing…Would a less elegant algorithm suffice? Yes, hit rates are 98–99% anyway, [so] any hash algorithm suffices.

      Anyway, at #6 is Guy Blelloch’s “Algorithms in the Real World” course which we just featured. The paper is the only reading under “Content Delivery Networks”, and the course notes give it as the impetus for Akamai. From a non-academe source, this August 1999 Wired story supports my use of “begat”; see especially the “La Sauce Est Tout” section: “The ingenious algorithm underlying this process is Akamai’s secret sauce, and as the French say, the sauce is everything.”

  6. September 13, 2011 1:07 pm

    Michael, it’s neither necessary, nor feasible, nor desirable that everyone think alike … that said, I’m glad there are graduate students like you in the pipeline! :)

    Please regard this as fan mail for your well-reasoned & well-stated post.

  7. Andy permalink
    September 13, 2011 9:10 pm

    A touching personal account of 9/11 and Danny from one of our own. Notice the subtext on how the perception of the Internet changed with 9/11, something I hadn’t realized (but kinda makes sense now). I suppose that it is not just necessary for the technology to be right, but it also must be right at the right time! This is why being an entrepreneur is soooo difficult.

    https://www.cs.umass.edu/news/latest-news/911-personal-remembrance-prof-ramesh-sitaraman

    Andy

  8. Michael G permalink
    September 15, 2011 12:20 am

    I agree with Dick’s response and don’t deny the historicity of what he says. I don’t know that much about the history of the company, but it seems to tie up with what little I have read.

    But, unfortunately, Ken’s “research by googling” leaves me a bit cold and in my mind inadvertently promotes the “naive theoretician” image that folks in industry often have of us. Surely, we can just reason out our conclusions from first principles instead of googling?

    First, companies seldom write about what makes them *really* tick, so googling is just a waste of time. It is the blind leading the blind, as what you get is mostly hearsay or marketing or hearsay of some marketing. There *are* a few exceptions, when someone who *actually* built a system writes a technical paper that makes sense from an **independent**, scientific perspective. Good papers like that seem to exist for *some* companies: the Google MapReduce paper, the Google BigTable paper, the Amazon Dynamo paper, or the Akamai paper I stumbled on and mentioned earlier. I like those types of papers, since I am finding a lot of interesting theory problems in them that could make an impact on practice (hopefully, god willing, one may become my thesis topic :-))

    Second, we love the “theoretician discovers a new algorithmic secret sauce and changes the world” narrative for obvious reasons. It makes us feel good. What is not so obvious is that companies like it too! It *sounds* profound, mathematically complicated, and not easily replicable by the competition. That is exactly the definition of a good marketing message that will impress customers, who are largely “lay” people scared of highfalutin math :-). The logo of EMC is E=MC^2 for a good reason :-) :-) It is also the reason behind the advertisements with a crazy-haired chemist in a lab coat holding a test tube :-)

    But we are not lay people, are we? We can actually understand *both* the systems and the algorithms and reason about them independently. If we think about it rationally, it should be pretty clear that Dantzig is right in that if the hit rates are high (which is true for most web workloads), any reasonable algorithm would work. However, where something like consistent hashing becomes more important is for web workloads with lots of cold objects that are not accessed frequently enough and must be replicated carefully or you will thrash to disk, such as social network content. Having said that, consistent hashing is likely not what made a difference on 9/11, since the workload was probably very, very hot (a hit rate of 99.9%, probably), as the same stuff would be accessed by millions of people. (Remember the video of the plane hitting the towers that was played ad infinitum that day?) So, my guess for what would have mattered that day is the component that moves traffic across data centers to spread the load, which really has nothing to do with consistent hashing. (Unfortunately, that Akamai paper does not reveal all the details of that algorithm.)

    At a higher level, I think that by believing the narrative that the cleanly stated toy problems which we love to think about can play or have played a pivotal role in a large real-life system, we are most likely living in a fantasy world. But, more importantly, the false narrative makes us turn a blind eye to the real, highly important role of algorithms in practice as a conceptual tool, which is a way more important aspect anyway.

    humbly,

    Michael G

    • September 15, 2011 4:57 pm

      That is another very thoughtful post, Michael. And yet, I am not so sure regarding your starting assertion that “companies seldom write about what makes them *really* tick.” My own observation is that almost all human enterprises (including corporations) work exceedingly hard to communicate what makes them “tick.”

      This illuminates (for me) what Renyi said of von Neumann: “Other mathematicians prove what they can, von Neumann what he wants.” This expanded into the (accurate) observation that, having surveyed what could be proved, von Neumann devoted his mathematical energies to proving what needed to be proved, in service of the vast enterprises of his generation.

      The point is that, with modern informatic and technological resources, pretty much anyone nowadays can embrace the strategy “prove what needs-to-be-proved”, without requiring the extraordinary skills of von Neumann.

    • September 15, 2011 8:59 pm

      I’m sorry that citing four sources expressly to illustrate a range of opinions “leaves you cold”—and I guess it was “naive” to include a reference to Peter Danzig (no ‘t’), who was also Akamai’s VP of Tech in 1999–2001, making your point. Even if this point in the concrete part of your comment renders inaccurate my ascription that “the load sharing feature of Akamai’s network” (described in my previous sentence) was importantly tested on 9/11, can it be fixed by pluralizing “features”? That is, without the fuss and slant on EMC^2 et al. and “everything you read is false” aspects which frankly give me shivers…

      BTW, Dick and I commented simultaneously yesterday by coincidence, and I stand by “begat” based on his defense which you agree with. I should also note that Ramesh Sitaraman of the Akamai paper you helpfully gave is also the writer of the nice reminiscence linked by Andy above. We can take the thermal parts offline (a try on the e-mail with your comment bounced), where also Dick and I have a constructive idea to make more out of your points.

      • September 16, 2011 6:30 am

        [This comment in reply mode---same text as John's below---came out Dickensian tall-and-skinny, and I edited it rather than delete it to avoid a possible deletion cookie. KWR]

      • September 16, 2011 12:43 pm

        Thank you very much for the html fix, Ken.

        Upon checking Akamai’s corporate history, it appears that the GLL/PNP section headings “The Paper That Begat a Company” and “The Company That Begat a Paper” each might have been used, with similarly good justification, in the sense that the paper and the company each were born largely of the other.

        Holy Rashomon, Batman! … surely there’s a many-leveled narrative here! :)

  9. September 16, 2011 6:31 am

    “Dick and I [Ken Regan] have a constructive idea to make more out of your points”

    Oh boy! Ken, whatever that “constructive idea” may be, please let me express the hope that it will become a Gödel’s Lost Letter and P=NP (GLL/PNP) essay.

    This is because I heartily agree with (what I take to be) a recurring GLL/PNP theme, namely, that good math is fundamental to creative enterprise. Similarly, my take on Michael’s point is a variant of an Iowa Sunday-School teaching:

    “God is on our side but he is not going to do all the work”

    As with God, so with good mathematics.

    A special and wonderful aspect of GLL/PNP, over its 300+ essays (so far), has been the sustained grace and constructive humor that recognizes these two great truths are not in opposition, but rather each strengthens and illuminates the other.

    Like Oliver Twist, we readers say, “Please sirs, we want some more!” :)

    • September 17, 2011 4:31 pm

      Your Iowa point reminds me of one of my dad’s stories, the one where someone lamenting unanswered prayers to win the lottery gets the reply, “Help Me out—buy a ticket!” And also of something I’d written in the bounced e-mail, that saying “begat” did not mean theory claiming credit for the diapering and food shopping and schooling and balancing and college saving and the preparation and application of love on a daily basis… :-)

      One concrete idea in that e-mail was that one indicator of what companies feel makes them tick is their patents, which are a matter of public record (findable by Google?—at least stories on the patents are).

  10. September 18, 2011 4:34 pm

    Ken, your reply suggested a fun project: search on Google Patents for key mathematical terms. A search for “category AND naturality” turned up only two patents (out of Google’s database of seven million), but one of these patents was spectacularly interesting: Method and system for self-adaptive code (2002), which arose within San Jose’s glamorously secretive Kestrel Institute.

    … and I for one welcome our new Skynet overlords! :)

    It’s nice too that Kestrel’s patent has a category-theoretic diagram in it; AFAICT this is the first such diagram to appear in the patent literature … a mere sixty years after the first such diagram in the mathematical literature (Eilenberg and Mac Lane, Natural isomorphisms in group theory, 1942).

    A similar search for “organic AND separation” (as inspired by a recent GLL essay) found more than 19,000 patents … so perhaps you and Dick are onto something interesting.

  11. Daniel Bilar permalink
    September 11, 2013 9:31 am

    Dear Readers

    Today, 9/11/2013, Danny is featured as the lead story http://www.cnn.com/2013/09/09/tech/innovation/danny-lewin-9-11-akamai/index.html?hpt=hp_t1 with Tom Leighton remembering him.
