Why People Don’t Trust Companies (or at least don’t trust their publicity)

I was at an all-day conference on online PR.  I’m not a real PR person, but I drive Valhalla’s PR.  I also know what I don’t know, so I hoped I’d get something out of an all-day klatsch on measuring PR effectiveness online.

Good information, for the most part.

But the bloggable thing was the exercise we did in the early afternoon: each table in the big room had to handle a synthetic online publicity crisis.  A video was uploaded to YouTube showing child laborers in our (fictional) coffee plantations in Brazil.  Kids saying, “Oh, yeah, I don’t get injured most of the time.”  Stuff like that.

We had five minutes to react, and then found out that our own people said the facts of the video were probably authentic.  And then moms began to blog about us online…

I said to our table, "Why not just tell the truth as we know it: yes, the footage is genuine; yes, this is a situation we're going to get on top of; yes, we are acknowledging it."

Everyone at the table was horrified: we couldn’t do that, it would “escalate” the crisis.

And no one else in the big room of 300 brought it up.  An acknowledgement wasn’t even on the table.

My wife tells me I’m nerdishly honest, and there’s something to that.  If someone had laid out a plan to acknowledge the damaging publicity in some face-saving way, it would have been an improvement on what I was suggesting.

But everyone’s response was to “keep it from spreading”, just the thing we had been told in a panel an hour before was the way _not_ to handle a crisis.

Oh, well.  I guess there are nuances to PR we amateurs don’t get.

Wisdom of Fights?

A lot of attention has been paid to the “wisdom of crowds”, with great discussion about whether, when, and how crowdsourcing gives accurate appraisals of situations.  We are the wiser for it.

But there has been very little talk about another widespread belief, perhaps a distinctively American one: I call it the "wisdom of fights".

I thought of this earlier this week watching yet another discussion panel where the MC clearly believed his job was to get the panelists to start disagreeing with one another.

Why?  Is there some intrinsic virtue to disagreement?

It’s a widespread belief.  Our justice system believes that both defense and prosecution should unabashedly attack one another’s positions, with the clear implication that this process will surface everything a jury needs to reach a decision.  The judge is required so the combatants will fight fair, but there’s no notion that the fighting itself is suboptimal.

Politics: the debate format has pretty much supplanted the speech format.  If we let Romney poke holes in Obama’s positions and Obama poke holes in Romney’s, we’ll know as much as if we had read through thoughtful presentations of each of their positions and then come to our own conclusions.

“Let’s you and him fight” is a very popular news format today, and most of the criticisms decry the lack of civility in the format, not the lack of veracity.

What makes science work is that both sides agree in advance that a certain experiment, if it comes out the wrong way, will falsify a theory.  Because the test is connected to the theory as a whole, something of significance takes place in the disagreement.  It’s profound disagreement.

So much of the “wisdom of fights” disagreement is shallow: it’s finding out that someone didn’t publish his tax returns, that someone won’t answer a certain question, that someone is vulnerable to a humiliating analogy or insult.  The disagreement isn’t under test in any way, except in the trivial sense that someone who stands up under repeated insult has some kind of staying power.

The wisdom of fights is very suspect.

Gordon’s Law

Some years ago, as a soon-to-be-ex-AI guy, I came to a realization that I immodestly named “Gordon’s Law”: it’s easier to program us to act like computers than it is to program computers to act like us.

If Gordon’s Law were not so, we would have voice recognition instead of “interactive” voice menus (“press 3 if you’ve despaired of leaving this menu”, etc.).  We would have automatic Root Cause Analysis rather than trouble ticketing systems.  We would have online advertising tailored to our wants and current projects rather than “personalization”.

To be sure, there is Watson, and there is Deep Blue, and my wife told me yesterday there’s some software competing for crossword puzzle champion of the world.  But in some sense — and I include Siri here — these are parlor tricks.  As Joseph Weizenbaum found out years ago with the software psychotherapist Eliza, there are some clever ideas that simulate humans to humans.  They don’t wear well.  There’s talk of having Watson do medical diagnosis, but there’s also talk of people wanting to throw their iPhones out the window when it turns out Siri really doesn’t do a very good job at all of understanding what we want or what we want to know.  And if Watson ends up doing decent medical diagnosis, I’ll eat my hat.

Why should Gordon’s Law be true?  Aren’t our brains “just” meatware?  Isn’t everything, as Stephen Wolfram says, a computation?

I don’t know, but I do know that we work well together — information devices and humans — when we do what we’re each good at.  We don’t pretend to be machines and they don’t pretend to be humans.

The Metadata Problem

I am not a metadata expert.  I have a couple of friends who could run circles around me in terms of depth and breadth of their experience.  But I do have opinions.

I’ve always thought that the logical person to append metadata — the person who brings the data in — is also the least likely person to know which metadata will be of interest.  Downstream, the consumers of data will have their separate — and diverse — metadata “agendas”, if you will.  The originator doesn’t know what those agendas are (and probably can’t know, since they change over time).  And, of course, the consumers of data don’t know what metadata apply to a particular dataset without examining it.

In addition, the task of appending metadata is an add-on: it’s something extra you have to do.  What incentive does the originator of a dataset have to do this, other than charity?

Tagging systems like del.icio.us have solved a part of this problem by a bottom-up system of tagging where metadata are tagged onto datasets retroactively by any user of the system.  These systems don’t satisfy metadata zealots because the vocabularies aren’t controlled, but, as the Wikipedia article on tagging says, things work out: the vocabularies are usable and typically converge, or at least don’t diverge too badly.  The crowd is, if not wise, at least not clueless.

It would be even better if there weren’t a separate tagging operation at all: some workflow that the user was going to do anyhow would implicitly add metadata.

Typical use case here: when a user drags an email to a “junk” or “spam” folder, the mail management system can infer that the email should be tagged as junk or spam.
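That use case can be sketched in a few lines.  This is a minimal illustration, not any real mail client’s API; the folder names and their implied tags are made up for the example.

```python
# Implicit tagging: metadata is inferred from an action the user was
# going to perform anyhow (filing a message), with no separate tagging step.
# The folder-to-tags mapping below is hypothetical.

FOLDER_TAGS = {
    "junk": ["junk", "spam"],
    "receipts": ["finance", "purchase"],
    "travel": ["itinerary", "travel"],
}

def infer_tags(existing_tags, destination_folder):
    """When a message is dragged to a folder, merge in the folder's
    implied tags without asking the user to tag anything."""
    implied = FOLDER_TAGS.get(destination_folder, [])
    return sorted(set(existing_tags) | set(implied))

# Dragging an untagged email to "junk" yields metadata for free:
print(infer_tags([], "junk"))  # -> ['junk', 'spam']
```

The point is that the tag vocabulary rides along on a workflow the user already performs, so the metadata costs the originator nothing extra.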

I struggle a lot to get proper metadata in my personal information cloud, by dragging emails to folders and tagging.  The payoff is that search works pretty well for me in tracking things down when I need to.

Your thoughts?

Connected TV

Reading a bunch about marrying Internet and traditional TV today, trying, among other things, to suss out how the ecosystem is going to develop.

One insight I had today: people will not prefer smart TVs, if they end up preferring them at all, because they’ve got one fewer box. The history of phones, smartphones, and now tablets shows that people pick their boxes because of functionality, not box count. People cheerfully carried around BlackBerrys and dumb phones together for years, one for email, one for voice. Today people have a phone and a tablet and a laptop, all for slightly different use cases, each picked for excellence of function.

My guess would be people will do the same for TVs. We will cheerfully combine legacy set top box, new box, and maybe even smart TV, if each excels at some purpose we want.

Money is a vector, not a scalar

We were having a discussion about “throwing good money after bad” the other day, and I found myself blurting out “well, after all, money is a vector, not a scalar.”

I’m sure you all remember (from your linear algebra class, perhaps) the difference between a vector quantity and a scalar. A scalar quantity has a magnitude while a vector has a magnitude and a direction.

“Good” money and “bad”. What are these but an additional dimension for money? In a bad investment the quantity of the money grows while its goodness shrinks; in a good investment they grow together. Additional money in a bad investment grows smoothly in quantity but has a discontinuity as it leaps from bad to good.

There are lots of discussions about money that acknowledge its vector nature. “Dumb” money and “smart”. “Patient” money. The “velocity” of money. “Easy” money (large first derivative of money with respect to effort).
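Taking the metaphor semi-seriously, here’s a toy sketch: model money as a magnitude (dollars) plus a “direction” (the goodness of the investment), so that adding money blends direction rather than just summing amounts. The quality scale is entirely made up for illustration.

```python
# A toy model of "money as a vector": an amount plus a direction.
from dataclasses import dataclass

@dataclass
class Money:
    amount: float   # magnitude, in dollars
    quality: float  # direction: +1.0 = good investment, -1.0 = bad

def add_money(a: Money, b: Money) -> Money:
    """Amounts add like scalars; direction is the amount-weighted blend."""
    total = a.amount + b.amount
    blended = (a.amount * a.quality + b.amount * b.quality) / total
    return Money(total, blended)

# "Throwing good money after bad": $50 of good money into a $100 bad
# position grows the magnitude, but the combined direction stays negative.
combined = add_money(Money(100.0, -1.0), Money(50.0, +1.0))
```

On this model the “discontinuity” in the post is the point where enough good money flips the blended quality from negative to positive.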

Maybe just a dumb metaphor. Your thoughts?

Private Equity Fund Performance

Bronwyn Bailey, who heads Research over at the Private Equity Growth Capital Council, turned me on to a great research paper by Harris, Jenkinson, and Kaplan on private equity performance.  The authors have the mildest of goals: to say what we actually can say about private equity performance.  They then do what all of us should have done — do some careful time series analyses of data from the few research sources available.

Their results?  “Average buyout fund returns in the U.S. have exceeded those of public markets for most vintages for a long period of time”.  And “average venture capital fund returns in the U.S., on the other hand, outperformed public equities in the 1990s, but have underperformed public equities in the 2000s”.

Worth a careful read over the weekend, I think.  And, if it’s not asking too much, maybe a bit of a counterpoint to the witch hunt about private equity fund compensation.

Connected Everything

Connected Everything – Our theme for last year’s Mobile Future Forward was “Connected Universe, Unlimited Opportunities.” It was one of the central themes of this year’s CES (and is likely to be for many more years). From health monitors to Sony Vita, from treadmills to autos, connectivity is driving new features, behavior, and hopefully consumer demand.

This prediction from Chetan Sharma’s thoughtful review of CES struck a chord. As the Internet becomes the World Network of choice, replacing closed and proprietary networks with an open multiple-purpose one, the number and kinds of devices hanging off the network proliferate.

Valhalla Partners StorageFest II

We had our StorageFest II on Jan. 10 up in NYC.  Fantastic event.  As Dave Vellante of Wikibon said:

Thinking about #storagefest. Best small event I’ve attended in a while. Practitioners, VC’s, BoD’s, entrepreneurs and pundits. #winners

— dvellante (@dvellante) January 13, 2012

Follow other traffic on the event at #storagefest.

(One of) my big takeaways from the event was: the different levels of the data stack have to talk to one another.  We got some Big Data storage clients talking with storage experts at our event, and the results were: Big Data needs storage that serves various needs, not just the same old file and block workloads.

Interested to see how that works out over time.

Thoughts?
