Sharing proto-patterns developed through the Data Portals and Civic Engagement design sprint
In the last post, I outlined the first phase of our design sprint on Data Portals and Civic Engagement. In this post, I’ll summarise three areas we identified for future design work, and introduce a set of proto-patterns that describe some of the ideas we developed.
As we explored in one of the preparatory essays for this sprint, addressing the problems with portals is unlikely to come from just piling features onto the existing software stacks that sit at the centre of the ‘portal pinch-point’. Instead, it may require doing less, but better. And it may call for re-framing the portal as a service, as much as it is software.
By carrying out further user-journey mapping, both for an active citizen looking to discover data and for the data stewards maintaining data on a portal, and by following two iterations of a “notes & sketches -> crazy-eights -> storyboarding” process, we’ve developed proposals that set out potential ways to bridge three key gaps:
• The data discovery gap - if the data or information that meets their needs exists, users should be able to find it.
• The data quality gap - the data users discover should be of the highest quality and fitness for purpose that it can be.
• The engagement gap - users should leave their engagement with portals with greater confidence to use data effectively as part of their civic engagement journey.
To try and summarise the range of ideas that were developed through this process, I’ve turned to the idea of design patterns: documenting the particular components that might be assembled as part of a ‘data portal as a service’, along with the context these patterns respond to and some of the other patterns they relate to. Given the scope of our design sprint, these are, at best, proto-patterns: drafts designed to highlight where future approaches could lie, rather than tried-and-tested solutions for bridging key data gaps. The patterns themselves are captured in this AirTable, where you can explore the relationships between them, and a selection is embedded below. The images in the patterns are mostly sketches made during the design sprint.
Before we turn to the proto-patterns we propose, it might be useful to explore a few of the anti-patterns we named: examples of how a dataset- and technology-centric (rather than user-centric) approach to portals has led to a lot of effort going into a number of (potentially) dead-end approaches. For example:
List all the re-uses of a dataset - (ANTI-PATTERN) Creating a comprehensive list of dataset re-uses demands a moderation and curation role few portals are well placed to take on, and leads to missed links, dead links, and an even greater maintenance burden.
Build a list of all the datasets - (ANTI-PATTERN) Focusing on listing all the datasets an organisation holds, including those not yet made public, ties up energy in quantity over quality, and often delays action on making data more usable.
Provide generic automated data visualisation - (ANTI-PATTERN) Attempts to provide 'preview' tooling that allows users to directly access and visualise the contents of any dataset on the platform rarely provide a satisfying user experience. Effort is better spent on summarising data, leaving visualisation to bespoke tools or analysis.
Many of these (anti-)patterns have developed over time as a result of trying to retro-fit user-focussed features onto already existing portal platforms, rather than stepping back and redesigning the wider portal user experience, taking into account all the technical, process and people elements that go into improving access to data. While the list above is far from exhaustive, when set against the proto-patterns below, it hopefully helps show the different directions a service- and user-focussed design approach can take portal development in.
Ideas surfaced in the sprint address data discovery from a number of directions: from updates to the search user experience (UX) on the portal as a technology platform, through to a ‘data guides’ service made up of people who could talk with those seeking data, helping them refine their search strategy or signposting them to potential data resources.
You can explore the draft discovery patterns above, or view each one here:
Tailor the search experience to the user - Interfaces can invite users to more clearly state the parameters of their search, providing interface elements to more easily navigate temporal, geographical, topical and data-type facets.
Provide onward journeys - Whether a user has discovered the data they need or not, offer a next step they can take on their journey.
Connect people to 'data guides' - A data guide is someone able to talk with a potential data user, and to help them better shape their strategies for data discovery. Guides may be able to signpost to particular resources or approaches to make better use of data.
Publish datasets alongside analysis - Instead of linking from dataset meta-data to uses of the data, the link should go the other way: key uses of a dataset should link to the data, providing meta-data in context, and allowing search to discover data where it is being used (see the sketch after this list).
Provide a dataset cart - Allow users to add datasets to a 'cart' which they can then review before downloading or accessing their selected data.
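To make the 'publish datasets alongside analysis' pattern slightly more concrete, here is a minimal sketch (in Python) of one way it could be implemented: embedding schema.org Dataset metadata in the page that presents an analysis, so that dataset search tools can discover the data in the context where it is being used. The dataset name, URLs and fields below are invented for illustration, not outputs of the sprint.

```python
import json

# Hypothetical example: an analysis page about school-place planning that links
# back to the dataset it draws on. All names and URLs below are illustrative.
dataset_metadata = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "School places by planning area, 2021",
    "description": "Annual forecast of primary school places, used in the analysis on this page.",
    "url": "https://data.example.gov/dataset/school-places-2021",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "temporalCoverage": "2021",
    "spatialCoverage": "Example City",
    "distribution": [
        {
            "@type": "DataDownload",
            "encodingFormat": "text/csv",
            "contentUrl": "https://data.example.gov/dataset/school-places-2021.csv",
        }
    ],
}

# Embedding this block in the analysis page's HTML means the meta-data is
# published in context, and dataset search tools can discover the data where
# it is actually being used.
json_ld_block = (
    '<script type="application/ld+json">\n'
    + json.dumps(dataset_metadata, indent=2)
    + "\n</script>"
)
print(json_ld_block)
```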
Many of the ideas developed to address the data quality gap focus on providing better tools and processes for data stewards, placing more emphasis on the important role they play in designing and maintaining data, and encouraging more thought about providing datasets as a service. These patterns also address some of the organisational challenges of sustaining engagement with, and resources for, data portals and data publication activities, as well as exploring ways to improve the feedback loop between potential data users and data stewards.
You can explore the draft quality-gap patterns above, or view each one here:
Provide a Data Publishers Toolkit - Data publishers should have access to a set of resources that help them work step-by-step through the process of publishing and maintaining a data resource.
Publish a 'Data Yearbook' - Establish a yearly 'publication' which features information on dataset updates, featured datasets, and planned activities for the year ahead.
Create a business case generator - A business case generator will help data stewards to identify the drivers for data publication, assess the resources required for quality publication, and clearly set out proposed activities for approval.
Create a publisher risk assessment tool - A risk assessment tool will help data publishers identify any privacy or other risks related to data publication, and to create an action plan to address these.
Publish a changelog - List all the recent updates to a dataset, including new releases of data, changes to the quality control process, or changes to the dataset schema (see the sketch after this list).
Scaffolded dataset feedback and requests - Help users to write good feedback or data requests, and support organisations to respond well to these requests.
Develop a service standard for datasets - Datasets can go through a draft, alpha, beta and live cycle in order to better shape them to meet identified user needs.
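As a rough illustration of the 'publish a changelog' pattern, the sketch below shows one possible machine-readable shape for changelog entries, rendered into a simple Markdown list for a dataset page. The field names and example entries are assumptions, not a proposed standard.

```python
from __future__ import annotations

from dataclasses import dataclass
from datetime import date


@dataclass
class ChangelogEntry:
    """One recorded change to a dataset: a data release, schema change, etc."""
    released: date
    change_type: str  # e.g. "data release", "schema change", "quality process"
    summary: str


# Illustrative entries for a hypothetical dataset.
changelog = [
    ChangelogEntry(date(2022, 1, 10), "data release", "Added Q4 2021 records."),
    ChangelogEntry(date(2022, 2, 1), "schema change", "Renamed the 'area' column to 'planning_area'."),
    ChangelogEntry(date(2022, 2, 15), "quality process", "Introduced automated validation of postcodes."),
]


def render_changelog(entries: list[ChangelogEntry]) -> str:
    """Render the changelog as a simple Markdown list for the dataset page."""
    lines = ["## Changelog", ""]
    for entry in sorted(entries, key=lambda e: e.released, reverse=True):
        lines.append(f"- {entry.released.isoformat()} ({entry.change_type}): {entry.summary}")
    return "\n".join(lines)


print(render_changelog(changelog))
```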
New roles and ways of working were central to the ideas explored to address the engagement gap: building on the idea of the portal as a switchboard (cf. Anastasiu et al.), and on ideas of community management as a means to support many forms of engagement, including in physical space. We also looked at how supporting practices, like maintaining a roadmap of planned data improvements, or capturing metrics that help data stewards better understand how their data is used, can feed into a more engagement-focussed approach to open data.
You can explore the draft engagement-gap patterns above, or view each one here:
Provide a switchboard service - A support switchboard would connect people with relevant forms of support to help them use data: from a short call with a 'data guide', through to commissioning detailed data science support.
Hire a community manager - A community manager acts as a bridge between data stewards in an organisation, and potential data users outside the organisation: organising events and activities, brokering conversations, creating resources, and making connections that improve discovery, quality and use of data.
Connect people to 'data guides' - A data guide is someone able to talk with a potential data user, and to help them better shape their strategies for data discovery. Guides may be able to signpost to particular resources or approaches to make better use of data.
Provide recipes for data analysis - Step-by-step recipes can help people to work with particular kinds of data - from tables, to maps - and can introduce ideas that users can then 'remix' to solve their particular data challenge.
Publish a roadmap - Publish a roadmap to show suggested and planned updates to a dataset, including future release cycles, planned changes to data collection, or proposed updates to data schemas and presentation.
Better metrics - Design a metrics framework for dataset use, including optionally collecting feedback from data re-users. Make sure dataset-level metrics are available to data stewards (see the sketch after this list).
Stack Overflow for data questions - Provide a space for users to ask questions and get community-driven suggestions for datasets that might meet their needs, or approaches to analysing the data.
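To give a flavour of what the 'better metrics' pattern could involve, here is a minimal sketch of dataset-level metrics that might be collected and surfaced to data stewards. The specific fields and example values are illustrative assumptions rather than a worked-out metrics framework.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class DatasetMetrics:
    """Illustrative dataset-level usage metrics; the fields are assumptions."""
    dataset_id: str
    downloads: int = 0
    api_requests: int = 0
    feedback_messages: int = 0
    reuse_examples_reported: int = 0  # optional feedback volunteered by data re-users


def steward_summary(metrics: list[DatasetMetrics]) -> dict[str, dict[str, int]]:
    """Summarise metrics per dataset so data stewards can see how their data is used."""
    return {
        m.dataset_id: {
            "downloads": m.downloads,
            "api_requests": m.api_requests,
            "feedback_messages": m.feedback_messages,
            "reuse_examples_reported": m.reuse_examples_reported,
        }
        for m in metrics
    }


# Example usage with made-up datasets and figures.
example = [
    DatasetMetrics("school-places-2021", downloads=120, api_requests=340, feedback_messages=4),
    DatasetMetrics("air-quality-sensors", downloads=55, api_requests=980, reuse_examples_reported=2),
]
print(steward_summary(example))
```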
In the next post reflecting on the design sprint, we’ll share two user journey maps, exploring what a hypothetical user journey engaging with data could look like with, and without, the elements envisaged by these patterns.
We’d love to get feedback and reflections on the ideas in this post: you can add comments using PubPub (highlight text and add your note).
We’re still working out where we go next with developing these ideas further (beyond the current project which closes at the end of March 2022), but would love to hear about any existing (or new) experiments that explore these patterns in different ways.