Web Observatory Facets

Feb 2014

Here is a Concept Map for an Observatory, highlighted to show which of the features/concepts have been implemented in a particular case.


Here is a first pass at generating the elements of a faceted hierarchy for Web Observatories. These are concepts/foci generated from a textual/thematic analysis of academic papers and other materials around the design and implementation of Observatories.

Newsflash – the future will be different and there will be “technology”


The difference between HOW do we do this and SHOULD we do this…?

Forecasting for 2035 (after Accenture’s team are retired and unavailable for comment) seems a bit “safe” to me … should we take from this that the world is going to be different in 20 years and technology will play a big part … really? I’m already guessing that 2055 is going to be different again and technology will be right there too (I might just live to see that and post a triumphant “I-told-you-so” on my neurally-embedded positronic IOH (Internet of Humans™) browser). How about something on the social impact of this …?

I’d be interested in understanding what a 40% increase in productivity does to the NUMBER of people employed (~40% fewer, perhaps, in some cases?) and how this affects our ability to grow markets when fewer consumers may be working. Also, given that ownership of large AI resources is very capital intensive (i.e. not sitting with the labour force), who will be benefitting from all this extra money when increasing amounts of work are done by machines and corporations are not currently famous for paying much tax? To simply assume all will be well if we move humans to a “creative” role is to naively assume that AI won’t take this role too, and ignores the possibility of automated design and creation. As with the Turing Test, the arbiter of whether AI (Artificial Intelligence) is indistinguishable from natural intelligence is the consumer (humans), so if the consumer can’t tell the difference – why not AC (Artificial Creativity)? Blocking progress on principle is objectionable, but not thinking about where the path we are on may lead us socially may be fatally naive.

Stone Tablets 2.0

The University of Southampton have come up with a new technology to encode VAST amounts of information on a glass disk that should be impervious to degradation … stone tablets 2.0.


Interesting idea. Stone tablets 2.0

The reason that stone tablets were so successful (it seems to me) is that they were resistant to decay/corruption and have a human sensory interface, i.e. they’re free from needing a technology stack – they just have an encoding context. People still create stone memorial tablets (plaques) today.

Hard disks, memory sticks, or even encoding information in DNA seem to fail on both counts: hardiness/resistance to entropy AND the ability to perceive the information directly.

The question is whether these slices can be read/perceived without a technology stack taller than the Empire State Building.

If you need an exotic “xargon particle reader” to even perceive the encoded data, I would say it’s a fantastic development but, as you say, it’s not clear anyone would be able to “restore the backup”.

I think the slices would need to be stored with big visual hints – on stone tablets maybe (lol) – that there was more to be found on the slices. Something amounting to a giant red arrow pointing to the slices and saying “look here”.

With thanks to Chris Gutteridge for some interesting ideas to start the day:

– would descendants, or future archaeologists, even recognise that the disk was a backup?
– would they know how to restore (activate) it?
– could they read/decode it?

Of Hammers and Nails

Maslow (and others, including Kaplan) suggested that when all you own is a hammer, everything starts to look like a nail.

My new Brownian Law states:

Maslow is right but don’t forget that when you are surrounded by nails everything can start to look like a hammer …

Beyond a play on words, there is a research finding here based on the convergence of multiple parties towards the idea of the Web Observatory but, interestingly, for diverse reasons. So diverse, in fact, that the idea they are converging towards may in fact be a boundary object (Bowker & Star) – a flexible concept/device which allows interpersonal agreement at a high level whilst allowing substantial divergence/flexibility at the detail level.


A Web Observatory or A Web Of Observatories – it’s all about shadows

artwork by Tim Noble and Sue Webster @ http://www.thisismarvelous.com/

One of the big questions I hear coming up again and again is around clarifying this difference – people often talk about A Web Observatory and then (without drawing breath) throw in a comment about THE Web Observatory.

Is this the same thing? Should we care about the difference between A and THE..?

Let me point out the difference between A web server and THE Web ..

  • You can own/manage a web server – no-one owns or manages the Web
  • If your web page or web server goes away only your part of the Web disappears

In effect, you can think of the Web as something that emerges from the web servers that are in operation – it’s the shadow that is cast by all web servers and Web content. Sometimes a shadow looks very different from the individual pieces that combined to cast it.

THE Web Observatory is the shadow that is cast by all the individual Observatories, Datasets and Apps that are in operation.

Web Observatory in 100 words

The Web Observatory is conceived of as a scientific tool for research into different aspects of Web data.

It can be thought of in two distinct pieces:

The first is a standalone observatory in which a particular organisation chooses to gather and curate data on a particular subject and to offer services, applications and analytics around this topic.

The second, more novel variation, is the interoperation between multiple observatories which allows researchers to gather data from multiple sources to create synthetic datasets and insights that would not be possible from a single system or perspective.
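As a purely hypothetical illustration of this second variation, the sketch below shows two observatories' records being joined into a single synthetic dataset. All names here – the observatories, the record fields, and the join key – are invented for illustration; real observatories would expose their data via APIs and agreed schemas.

```python
# Hypothetical sketch: combining records from two independent observatories
# into one "synthetic" dataset. Observatory names, record fields and the
# join key ("topic") are invented for illustration only.

def merge_observations(obs_a, obs_b, key="topic"):
    """Join two observatories' records on a shared key, keeping only
    topics observed by both (an inner join)."""
    index_b = {rec[key]: rec for rec in obs_b}
    merged = []
    for rec in obs_a:
        match = index_b.get(rec[key])
        if match is not None:
            merged.append({**rec, **match})  # fields from B added alongside A's
    return merged

# Sample records two standalone observatories might each curate
social_obs = [
    {"topic": "health", "tweets_per_day": 12000},
    {"topic": "elections", "tweets_per_day": 55000},
]
sensor_obs = [
    {"topic": "health", "sensor_readings": 340},
]

synthetic = merge_observations(social_obs, sensor_obs)
print(synthetic)
```

A researcher querying the combined result sees tweet volume and sensor data side by side for the "health" topic – an insight neither observatory could provide alone, which is the point of the interoperation.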

So why a data catalyser ..?

Thinking about the Connected Digital Economy Catapult (CDEC) platform that was previously called the Trusted Data Accelerator (TDA) and has now been re-branded as the Data Catalyser.

The question is what does a system like this do that existing analytics systems or Web Observatories do not do? The answer is quite straightforward:

There are many collaborative systems which employ open data for public, shared benefit, and many isolated systems which search for private market insight on closed data. There are, however, few if any systems that successfully allow collaborative, closed-data work: that is, allowing temporary alliances, and temporary sharing of datasets through a neutral third party, for sharing of benefits though NOT of IPR.
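A minimal sketch of the neutral-third-party idea, under invented assumptions: two parties deposit closed datasets with a broker, which returns only an agreed aggregate result, so the shared benefit flows without the raw data (or the IPR in it) changing hands. The class, party names, and aggregate function are all hypothetical illustrations, not a description of the Data Catalyser's actual design.

```python
# Hypothetical sketch of a neutral third party for temporary, closed-data
# collaboration. All names and the aggregate function are invented.

class NeutralBroker:
    def __init__(self):
        self._submissions = {}

    def submit(self, party, closed_data):
        """A party deposits its closed dataset for the life of the alliance."""
        self._submissions[party] = closed_data

    def joint_result(self, aggregate):
        """Run an agreed aggregate over the pooled data; only the result
        leaves the broker, never the underlying records."""
        pooled = [x for data in self._submissions.values() for x in data]
        result = aggregate(pooled)
        self._submissions.clear()  # temporary alliance ends: raw data discarded
        return result

broker = NeutralBroker()
broker.submit("retailer", [102, 98, 110])   # e.g. private daily sales figures
broker.submit("logistics", [95, 105, 100])  # e.g. private delivery volumes
avg = broker.joint_result(lambda xs: sum(xs) / len(xs))
print(avg)  # each party learns the joint average, never the other's raw data
```

The design choice the sketch illustrates is that the broker is the only place the two closed datasets ever co-exist, and it holds them only transiently – which is exactly the gap the paragraph above says existing open-data and closed-data systems leave unfilled.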

We are all forgetting ..

Original CBR Article

Take a look at the original CBR article about a Kaspersky survey and then see if you understand my rant below …

[Caveat – my comments are based on the CBR report here, as I haven’t read the Kaspersky report (which I can’t find online and which, annoyingly, isn’t linked in the article). Other Kaspersky reports/surveys I’ve seen seem fairly well constructed, so I am assuming this is an issue with the CBR article.]

Digital Amnesia TAKES HOLD!!!! Shock horror ..

Oh dear … whilst ensuring you don’t lose your data IS important, and I understand that Kaspersky wants to sell me perfectly good services to help prevent that, this article is nonetheless rather disappointing in the way the “evidence” (sic) is unconvincingly thrown together.

– Telling me that 87% of people can’t remember their kids’ school phone numbers (now) and that smartphones are “crippling” our memory seems to heavily imply that this is worse than earlier performance. Is it? Did it used to be 97%, and are smartphones actually IMPROVING our memory? No way to tell.

This is like writing a piece about a man and getting excited that he now weighs 87kg … so what? What did he used to weigh? Is he starving or eating himself to death? See the problem? There may be an issue or worrying trend here, but it would be nice to have some evidence that was organised in a way that demonstrated it and offered some insight.

– The article seems to imply that using tools to support your memory is somehow a modern phenomenon. I closed my eyes and mentally replaced the idea of “smartphone” with “Filofax”, “organiser” or “address book” and guess what? I’m recalling how we all used to be “devastated” if we lost our organiser or our address book (which also had no backup). Just because the device/tool has batteries would seem to make very little difference to the impact on your recall: either the user relies on a portable (ubiquitous) offloaded record of the information or they don’t.

– The article wags a finger at us because we can’t remember our kids’ data. Again, it implies that we USED to remember but don’t any more. Nonsense. Kids previously didn’t have this much data: mobile phone numbers, email addresses, Twitter handles, Snapchat IDs, etc., so this is another poorly constructed comparison. The amount of data (in this case proxies for identity) that *could* be memorised has been growing sharply ever since the birth of the Web, and more sharply still since the advent of social networks and mobile platforms. And yes – it is harder to remember more things than fewer things…

I suspect the Kaspersky report is probably rather better constructed than this summary, but surely the fact that we couldn’t remember all this stuff even if we wanted to is “information overload” and not “amnesia”?

Also, as far as I understand it, people who constantly practice remembering things, rather than off-loading them, have better recall than those who don’t practice. Perhaps we should drop our technology (and our Kaspersky products!!) and just practice remembering things …

.. oops