Updates from Doug Belshaw

  • Doug Belshaw 6:32 pm on June 18, 2017
    Tags: houses, living   

    Where do you live when you can live anywhere? 

    I’ve just been reflecting on a conversation with someone on Mastodon earlier this week. I don’t know his name, and can only infer that he’s UK-based from his toots — especially as he’s using a German instance of the decentralised social network! That’s interesting in and of itself, especially at a time when, for reasons of ‘security’ (and advertising), there seems to be a real push for our offline identities to be closely aligned with our online ones.

    Running in Circles toot

    This ‘toot’ interested me because, as a family, we’ve had reason to reconsider where we live over the past few weeks. I think we’ll stay put for now, for reasons I’m not going to go into (all good!), but it brought back memories of not moving to Gozo three years ago.

    The list that ‘Running In Circles’ gives above makes me realise how fortunate we are in our current position. We don’t really have enough storage space outside, and I’d like more community activities that interest me, but where we live in Morpeth, Northumberland, ticks the rest of the boxes. I know my wife would like a bigger garden, one that allows her to see the kids playing from the kitchen window (which is on the wrong side of our current house), but it was big enough for our barbecue yesterday!

    It’s not up to me to judge, but I see so many people — parents of children who are friends with my kids, for example — striving for larger and larger houses. We live in a terraced house that we stumbled across when the move to Gozo fell through. We rented for the first few months but realised that it would be a great place to live on a longer-term basis. We’ve converted the roof, I’ve got an outside office, and we’re in a quiet, leafy spot near the centre of town. It’s great.

    So I guess the point of this post is to consider that what makes you and your family happy and healthy might not be what everyone else is striving for. It might not be covered by the default options provided by online property search websites…

     
  • Doug Belshaw 4:36 pm on June 16, 2017
    Tags: andragogy, development, learning, School of Life, workshops   

    On the importance of transitions 

    I’m a fan of the School of Life. I think they do a great job of popularising key philosophical ideas in relevant, applicable ways. Just check out their YouTube channel.

    As you’d expect, the School of Life caters for businesses, with an attractive learning and development brochure. Their curriculum is broken down into 24 emotional skills:

    School of Life - L&D - Emotional Skills

    These skills are then unpacked on following pages, giving a high-level overview of the two-hour sessions they offer around each one:

    School of Life - L&D - Self Awareness & Supportiveness

    From there, the two-hour sessions around a particular skill are then organised into sample playlists:

    School of Life - L&D - Pathways

    This is a pretty standard model. It means that the people delivering the courses get to develop core content they can re-use. Meanwhile, the customer gets to tailor courses based on their priorities/interests.

    What’s missing from all this? Transitions.

    I think I’m right in saying that, for 24 skills, there are 552 ordered ways to link together two different sessions (24 × 23). The number of possible playlists then grows factorially as you add more sessions into the mix. As a result, you need talented people who can make the transitions between sessions make sense, ensuring the whole day combines to become more than the sum of its parts.
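    A quick sketch of the combinatorics, assuming sessions are ordered and not repeated within a day:

    ```python
    from math import perm

    SKILLS = 24  # the brochure's 24 emotional skills

    # Ordered ways to chain two different sessions: 24 choices, then 23
    print(perm(SKILLS, 2))  # 552

    # The number of possible playlists grows factorially with day length
    for length in range(2, 6):
        print(length, perm(SKILLS, length))
    ```

    Even at modest playlist lengths the space is far too large to script every transition in advance.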

    Obviously, when you’re putting together a brochure like this for people whose specialism isn’t learning and development, you want to hide some of the complexity involved. However, it’s worth drawing attention to it now and again as running a bespoke workshop, just like teaching in a school or university, is as much of an art as it is a science.

    Given the average feedback score (9.1 out of 10) for these sessions, the School of Life has not only great sessions but great transitions. That’s where the secret sauce lies: people need to know that they’re not having something done to them, but that facilitators are being responsive, truly tailoring the day to who’s in the room.

     
  • Doug Belshaw 9:00 am on June 16, 2017
    Tags: documentation, GitHub, Open source, survey   

    What’s the biggest problem in open source right now? 

    Documentation.

    Fig.1 - Problems encountered in open source

    The 2017 Open Source Survey, carried out by GitHub, is a valuable source of information. Large parts of it, however, prove somewhat difficult to interpret and act upon given the huge skew in gender:

    The gender imbalance in open source remains profound: 95% of respondents are men; just 3% are women and 1% are non-binary. Women are about as likely as men (68% vs 73%) to say they are very interested in making future contributions, but less likely to say they are very likely to actually do so (45% vs 61%).

    There’s a systemic issue here. While the survey indicates that ‘serious’ incidents have been experienced by a relatively small number of respondents, these have an outsized impact on the community:

    By far, the most frequently encountered bad behavior is rudeness (45% witnessed, 16% experienced), followed by name calling (20% witnessed, 5% experienced) and stereotyping (11% witnessed, 3% experienced). More serious incidents, such as sexual advances, stalking, or doxxing are each encountered by less than 5% of respondents and experienced by less than 2% (but cumulatively witnessed by 14%, and experienced by 3%).

    There’s work to do here. Privileged white males like me who are involved in open source (in whatever way) need to realise that issues that affect anyone in the community affect the whole community. It’s easy to see why “not all open source contributors” isn’t a valid response when you see data like this:

    Fig.3 - Importance to project

    Finally, with one important caveat, this last chart chimes with what I look for when seeking out new software:

    Fig.5 - What open source users value in software

    The reason the ‘support’ option scores so low, I’d argue, is the survey methodology. People who actively contribute code to open source projects are a subset of users of the software. Given that ‘documentation’ would come under ‘support’, it’s ironic that the first and last charts here seem to contradict one another!

    Either way, it’s clear that the open source community still has work to do to make people new to projects feel involved, and for them to know what to do. I’d call this ensuring you’ve got your ‘architecture of participation’ right.

     
  • Doug Belshaw 7:27 pm on June 15, 2017
    Tags: theme, Twitter

    Halcyon makes Mastodon look and feel like Twitter 

    For the last seven weeks or so, I’ve been off Twitter and instead been using Mastodon, an open source, decentralised social network. One of the great things about this is that, like the Linux laptop I’m typing this on, it can be configured to my heart’s content.

    While there’s a ‘default’ way that Mastodon looks (which depends on which instance you’ve signed up to), you can use a plethora of apps and web views to customise your experience. In this way, it’s like Twitter was in the early days. The difference is that there’s no company in the middle looking for an IPO and therefore shutting down ‘competition’.

    I’m signed up to social.coop, an instance of Mastodon for those interested in co-operatives, which practises what it preaches: members pay to co-own the instance and have voting rights. You can find me at social.coop/@dajbelshaw. Here’s what it looks like for me normally:

    Default social.coop layout

    Today I discovered Halcyon, which allows you to login with your federated Mastodon credentials. You’re then presented with an interface that closely resembles Twitter’s web interface:

    Halcyon

    This is familiar, but I’m in two minds about whether it’s a ‘good thing’ or not. On the one hand, it’s great to have things that are easy to use and don’t have a steep learning curve. On the other, it’s so close to Twitter that it might be difficult for people to understand the difference. They might just write off Mastodon as a Twitter clone where less of their network is. That would be a shame.

     
  • Doug Belshaw 7:45 am on June 9, 2017
    Tags: data, demographics, General Election   

    Sky Data - vote by age group

    Link to tweet

    I think this tells you everything you need to know about Brexit — and social justice issues.

     
  • Doug Belshaw 10:24 am on June 7, 2017

    Facebook paranoia 

    Evil Facebook

    Thanks to Eylan Ezekiel for the link to an article in The Guardian that contains a couple of quote-worthy paragraphs: On Facebook, even Harvard students can’t be too paranoid.

    We now learn that Facebook holds a patent for technology – as yet unexploited – that would use your computer’s camera to gauge your emotional state from your facial expression, the better to target you with content and advertising. Just days ago, Facebook was granted another patent for a system that judges your mood from your typing speed. It’s hard to believe I’ve spent the past five years being insufficiently paranoid.

    Some people might think I’ve got a hang-up about the evils of Facebook when there are other tech giants doing bad stuff too. They’d be right: while there are abuses and underhand dealings happening all over the place, I think that Facebook in particular is dumbing down users of the web, and also ‘norming’ surveillance culture.

    Posting on Facebook is like writing in your diary and then leaving it in the park. Actually it’s worse: it’s like giving a football stadium permission to show highlights from your diary on the giant screen. Think about it that way – if nothing else, you’ll take a lot more care with apostrophes in future.

    It’s easy to be flippant (although I’m always careful with apostrophes) but just you wait until the shit hits the fan.

    Just. You. Wait.

     
    • Tony Parkin 10:37 am on June 7, 2017

      When the shit DOES hit the fan, the odds are that someone will record it on their phone-camera and upload it to Facebook? We won’t need to wait for long…. 😉

    • Aaron 2:29 pm on June 7, 2017

      Seligman mooted this in his book on positive psychology, Flourish. The problem is that he thought measuring our emotions was a positive thing.

      • Doug Belshaw 6:10 pm on June 7, 2017

        Thanks Aaron, and for sending me the quotation via email. I’ve got no problem with measuring emotions. What I do have a problem with is when it’s done via secretive technologies, the agreement to which is buried away in labyrinthine terms of service, privacy policies, and user agreements.

        The other thing to consider, of course, is that if you do sentiment analysis, you’re getting the ‘survivorship bias’ of those who have stuck with privacy-infringing platforms such as Facebook…

  • Doug Belshaw 8:34 am on June 6, 2017

    The Surveillance State will be made possible through Surveillance Capitalism 

    Yesterday, Apple announced ‘HomePod’ – a virtual assistant-cum-speaker in the mould of the Amazon Echo and Google Home. Suffice it to say that Chez Belshaw won’t be investing in one of these soon – or ever, unless some pretty basic concerns I have about privacy are addressed.

    In Rise of the machines: who is the ‘Internet of things’ good for? Adam Greenfield challenges us to question why we’re so keen to let these kinds of devices into our lives and homes:

    Whenever a project has such imperial designs on our everyday lives, it is vital that we ask just what ideas underpin it and whose interests it serves. Although the internet of things retains a certain sprawling and formless quality, we can get a far more concrete sense of what it involves by looking at how it appears at each of three scales: that of our bodies (where the effort is referred to as the “quantified self”), our homes (“the smart home”) and our public spaces (“the smart city”). Each of these examples illuminates a different aspect of the challenge presented to us by the internet of things, and each has something distinct to teach us.

    It’s an excellent article that neatly summarises some of the problems around so-called IoT devices. The assumption with all of these things is that they serve us. That assumption couldn’t be more wrong: it’s us that eventually bend towards their algorithms and views of the world – programmed with a very particular world view:

    At first, such devices seem harmless enough. They sit patiently and quietly at the periphery of our awareness, and we only speak to them when we need them. But when we consider them more carefully, a more problematic picture emerges.

    This is how Google’s assistant works: you mention to it that you’re in the mood for Italian food, and then, in the words of one New York Times article, it “will then respond with some suggestions for tables to reserve at Italian restaurants using, for example, the OpenTable app”.

    This example shows that though the choices these assistants offer us are presented as neutral, they are based on numerous inbuilt assumptions that many of us would question if we were to truly scrutinise them.

    While we’re (with a modicum of futility) teaching schoolchildren and those new to the web to go beyond the first page of Google, the next wave of devices does away with even that ability to question what you’re being presented with. It’s like constantly pressing the “I’m Feeling Lucky” button:

    There are other challenges presented by this way of interacting with networked information. It’s difficult, for example, for a user to determine whether the options they are being offered by a virtual assistant result from what the industry calls an “organic” return – something that legitimately came up as the result of a search process – or from paid placement. But the main problem with the virtual assistant is that it fosters an approach to the world that is literally thoughtless, leaving users disinclined to sit out any prolonged frustration of desire, and ever less critical about the processes that result in gratification.

    I’m trying to raise my children in a way that makes them thoughtful and critical users of apps and the web. They know that there are different operating systems and browsers. They’re aware that DuckDuckGo protects your privacy, as opposed to Google, Bing, and the like. But faced with a virtual assistant like Siri, all that goes out of the window. All of a sudden, they’re interacting with a ‘thing’ – and even more unaware of the bias and skewing that comes with something that’s been programmed to give you frictionless access to pre-programmed information and services.

    We’ve messed about with Google Now and Siri before, but the main reason I don’t want something like Amazon Echo in my home is that it normalises ‘corporate surveillance’. The conversations that happen in my home are private, and I want them to stay that way:

    Virtual assistants are listening to everything that transpires in their presence, and are doing so at all times. As voice-activated interfaces, they must be constantly attentive in order to detect when the “wake word” that rouses them is spoken. In this way, they are able to harvest data that might be used to refine targeted advertising, or for other commercial purposes that are only disclosed deep in the terms and conditions that govern their use. The logic operating here is that of preemptive capture: the notion that companies such as Amazon and Google might as well trawl up everything they can, because no one knows what value might be derived from it in the future.

    I guess that I’m an early-adopter slowly trying to reform myself. Those on the cutting edge have to put up with a lot, historically. Buggy and half-finished software coupled with clunky hardware is often forgiven because the idea, the vision is compelling. These days, it’s less that the hardware and software is problematic with early offerings — although that of course can also be an issue — it’s more to do with the terms of service and the privacy policy you’re forced to sign up to:

    Put aside for one moment the question of disproportionate benefit – the idea that you as the user derive a little convenience from your embrace of a virtual assistant, while its provider gets everything – all the data about your life and all its value. Let’s simply consider what gets lost in the ideology of convenience that underlies this conception of the internet of things. Are the constraints presented to us by life in the non-connected world really so onerous? Is it really so difficult to wait until you get home before you preheat the oven? And is it worth giving away so much, just to be able to do so remotely?

    Greenfield moves swiftly on from discussing the home to the really scary proposition: smart cities. After all, we can choose not to have the devices mentioned above in our homes, but we don’t get that choice when it comes to civic spaces:

    A broad range of networked information-gathering devices are increasingly being deployed in public space, including CCTV cameras; advertisements and vending machines equipped with biometric sensors; and the indoor micropositioning systems known as “beacons” that, when combined with a smartphone app, send signals providing information about nearby products and services.

    The picture we are left with is that of our surroundings furiously vacuuming up information, every square metre of seemingly banal pavement yielding so much data about its uses and its users that nobody yet knows what to do with it all. And it is at this scale of activity that the guiding ideology of the internet of things comes into clearest focus.

    Quite apart from the fact that I just don’t want to be tracked, thank you very much, this is a social justice issue. While advocates of smart cities see data as neutral, we’re fully aware that it’s nothing of the sort:

    There is a clear philosophical position, even a worldview, behind all of this: that the world is in principle perfectly knowable, its contents enumerable and their relations capable of being meaningfully encoded in a technical system, without bias or distortion. As applied to the affairs of cities, this is effectively an argument that there is one and only one correct solution to each identified need; that this solution can be arrived at algorithmically, via the operations of a technical system furnished with the proper inputs; and that this solution is something that can be encoded in public policy, without distortion. (Left unstated, but strongly implicit, is the presumption that whatever policies are arrived at in this way will be applied transparently, dispassionately and in a manner free from politics.)

    As Greenfield notes, every aspect of this approach is questionable; you can deploy as many sensors as you want, but they can only capture what they were designed to capture. There’s no way they can gather enough information to adequately base policy upon them. As such, all data is subject to interpretation:

    Advocates of smart cities often seem to proceed as if it is self-evident that each of our acts has a single, salient meaning, which can be recognised, made sense of and acted upon remotely by an automated system, without any possibility of error. The most prominent advocates of this approach appear to believe that no particular act of interpretation is involved in making use of any data retrieved from the world in this way.

    But data is never “just” data, and to assert otherwise is to lend inherently political and interested decisions an unmerited gloss of scientific objectivity. The truth is that data is easily skewed, depending on how it is collected. Different values for air pollution in a given location can be produced by varying the height at which a sensor is mounted by a few metres. Perceptions of risk in a neighbourhood can be transformed by slightly altering the taxonomy used to classify reported crimes. And anyone who has ever worked in opinion polling knows how sensitive the results are to the precise wording of a survey.

    We already see this in the workplace, and in schools. Any time that complex forces and interactions are reduced to a single data point, we’re in trouble. Unsurprisingly, these reductionist approaches tend to favour those who are already privileged, and marginalise minority (or unpopular) voices.

    If the formulas behind this vision of future cities turn out to be anything like the ones used in the current generation of computational models, life-altering decisions will hinge on the interaction of poorly defined and subjective values. The output generated by such a procedure may turn on half-clever abstractions, in which complex circumstances resistant to direct measurement are reduced to more easily determined proxy values: average walking speed stands in for the “pace” of urban life, while the number of patent applications constitutes an index of “innovation”, and so on.

    Quite simply, we need to understand that creating an algorithm intended to guide the distribution of civic resources is itself a political act. And, at least for now, nowhere in the current smart-city literature is there any suggestion that either algorithms or their designers would be subject to the ordinary processes of democratic accountability.

    The concept of ‘smart cities’ is being pushed by governments who want to know more about their citizens, and by for-profit companies looking to monetise data and sell devices. We need a critical third voice in there, one representing the people. My concern is that the recent terrorist atrocities will pave the way for a surveillance state, made possible through surveillance capitalism. Right now, that sounds like a bit of a nightmare.

     
  • Doug Belshaw 7:40 am on June 6, 2017

    The threat (and promise) of crypto-anarchism 

    In an article for The Guardian entitled Forget far-right populism – crypto-anarchists are the new masters, Jamie Bartlett outlines something that I think isn’t even remotely on the radar of most people.

    Bartlett explains how the surge in cryptocurrencies such as Bitcoin is the tip of the iceberg when it comes to an approach to the web that’s a couple of decades old. While Theresa May and her ilk want to decrypt everything, crypto-anarchists want to, as their name suggests, encrypt everything. Not only that, they want to do so in a way that makes the idea of the ‘State’ irrelevant:

    Crypto-anarchists are mostly computer-hacking, anti-state libertarians who have been kicking around the political fringes for two decades, trying to warn a mostly uninterested public about the dangers of a world where everything is connected and online. They also believe that digital technology, provided citizens are able to use encryption themselves, is the route to a stateless paradise, since it undermines government’s ability to monitor, control and tax its people. Crypto-anarchists build software – think of it as political computer code – that can protect us online. Julian Assange is a crypto-anarchist (before WikiLeaks he was an active member of the movement’s most important mailing list), and so perhaps is Edward Snowden. Once the obsessive and nerdy kids in school, they are now the ones who fix your ransomware blunder or start up unicorn tech firms. They are the sort of people who run the technology that runs the world.

    Understandably, the ‘deep state’ and institutional antibodies are going to kick in to fight against crypto-anarchists. In the end, though, politicians aren’t going to have much choice, argues Bartlett:

    At present, technology stands outside the messy business of politics, but in a couple of elections’ time, AI, big tech, the sharing economy, will be discussed as angrily as immigration or the NHS now. Does anyone seriously believe that Jeremy Corbyn or Theresa May or Tim Farron or Nicola Sturgeon have the foggiest clue about any of this, and what to do about it? (I’ve not even mentioned climate change, synthetic biology, the continued mass movement of people, billions of connected internet-enabled devices.) To most politicians – even the left, which once imagined that its “white heat” would forge a better world – technology is mainly viewed as a job creator or deliverer of efficiency. The phrase “centre of innovation” is the digital equivalent of motherhood and apple-pie: no right-minded politician could ever oppose it. True, things are slowly changing and there’s more about digital technology in this round of manifestos than ever before: the Conservatives promise a digital charter, the Lib Dems mention artificial intelligence, and Jeremy Corbyn launched a special “digital manifesto” last year. Maybe I’m expecting too much, and maybe citizens don’t care enough either. But none of this yet amounts to a vision that matches the scale of what’s going on.

    What’s scary from my perspective is that, on the whole, people aren’t getting more digitally savvy, they’re getting less so. I see a retreat to shiny walled gardens such as Facebook and Apple services that make everything seamless. Users no longer have to think. Decentralisation and the circumvention of gatekeepers only leads to good things if citizens can make informed choices. I don’t think that’s currently the case, and unless something radically changes, it’s certainly not going to happen in the future:

    [P]erhaps a better comparison than the 1930s for our age is the 1820s. That period witnessed what must have felt at the time like unprecedented change and confusion: the onset of industrialisation, political revolution and counter-revolution, great leaps in science, and the first railways. A British prime minister was assassinated. Luddites smashed machines, fearing that the power loom – that generation’s artificial intelligence – would cause mass unemployment. But the turmoil and instability of the last industrial revolution did not thrust us inexorably into the arms of tyrants. It did however shake up old assumptions as never before, stimulating a flowering of ideas, some of which were stirrings of the modern world: working-class consciousness, extended (albeit still limited) suffrage, Factory Acts, socialist theory, Catholic emancipation and utilitarianism.

    Up until now, my fears about technology in the future haven’t been groundless, but things nevertheless haven’t turned out as bad as I thought they perhaps could. One of the reasons for that has been the ‘deep state’, the institutional antibodies, slowing things down. What’s described by Bartlett in this article, however, is technology coming directly for those antibodies, destroying them, and building a new world.

    Anyone for Bitnation?

     
  • Doug Belshaw 6:23 am on June 1, 2017

    Manson’s Law of Avoidance 

    I finally finished Mark Manson’s The Subtle Art of Not Giving a F*ck last night. It’s an interesting book, full of advice that’s actually pretty useful.

    Perhaps the most useful section for me was where Manson names (after himself, of course – he’s a superstar blogger) a ‘law’ which I’ve definitely noticed both professionally and personally:

    The more something threatens your identity, the more you will avoid it.

    I’m aware of this in myself, members of my family, colleagues, the electorate, and almost everyone I’ve ever met. The problem is, of course, that most people’s identities are an incoherent mish-mash of values, pragmatism, and genetics.

    Without reflection and growth built into our lives, avoiding things that threaten our identity makes us tribal, irrational, and unwilling to try out new ideas and approaches.

    I’m not a fatalist. I believe that we can and should be able to change what we believe without being labelled hypocrites. But ‘hypocrisy’ is something that those on the political left seem to hate more than anything – meaning, of course, that we’re subject to the politics of identity just as much as those on the political right.

     
  • Doug Belshaw 12:43 pm on May 26, 2017
    Tags: evidence

    Do we have a problem with 'evidence' in education? 

    I was alerted this week to a comment on a Google Doc I’d created earlier this year. It was a draft post for DML Central in which I tried to make a subtle point about the problem with ‘evidence’ in education. However, as some people pointed out, it came at an awkward time, with Trump’s inauguration, etc.

    The quotation below gives a taste of it. I’m happy to have a link to it from my own site:

    Saying that ‘evidence’ (and I’ll continue to use it in scare quotes) is the best thing upon which to base decisions is exactly the same neoliberal narrative that Silicon Valley companies use when they say they’re trusting algorithms based on data. There’s an appeal here to a manufactured, ersatz objectivity that attempts to short-circuit all forms of debate and discussion. After all, no-one wants to be accused of ‘prejudice’. The idol has been fashioned, now we must all bow down to it.

    Click here to read the article in full

     