T14P03. The Data/Sensor Revolution and Public Policy

Topic : Science, Internet and Technology Policy

Chair : Jouke de Vries (RUG/Campus Fryslân)

Second Chair : Sarah Giest (Leiden University)

Third Chair : Reuben Ng (Lee Kuan Yew School of Public Policy)


General Objectives, Research Questions and Scientific Relevance

One of the central assumptions in theories of decision- and policymaking has been that there is never
enough information to make the best possible decision. The psychologist and Nobel Prize winner
Herbert Simon argued that decision-making is never fully rational, because rationality itself is limited.
Rationality is bounded by the limited capacities of human intelligence and cognition, and by all kinds of
difficulties within the political and administrative system. The solutions produced within a complex political
and administrative system are therefore suboptimal, which is why complicated societal issues are difficult
to solve. This assumption of bounded rationality became dominant in theories of decision- and
policymaking in political science and public administration: decision- and policymaking was no longer
seen as rational-synoptic, but as incremental and political.

The assumption of bounded rationality has given rise to different theories of decision- and policymaking.
The rational-synoptic model was succeeded by the theory of incrementalism, according to which the
essence of decision-making is to take small steps. Empirical and theoretical work subsequently led to a
synthesis of the rational decision-making model and incrementalism: the model of mixed scanning.

At the end of the twentieth century, new models of decision- and policymaking received more
attention. These models were based on chaos and complexity theory from the natural sciences
and theoretical biology. Starting from the assumption that planning decision-making is difficult because
relations are no longer linear, coincidence became a crucial element in explaining processes of
decision-making. Two models shaped this development. The first is John W. Kingdon's work
on political agendas. Kingdon distinguishes three streams: societal problems, alternatives and politics.
Only when these three streams converge can there be fundamental decision-making. The second is
the punctuated equilibrium model (Baumgartner & Jones). Most of the time political and administrative
systems are characterized by stability, yet sometimes the decision-making process becomes more
turbulent. The transition between stability and turbulence is the punctuation of the equilibrium.

The process of digitalization is changing the dynamics of decision- and policymaking:
information is no longer scarce in society or in political and administrative systems. On the contrary,
data are now everywhere. Decision makers are no longer confronted with a lack of information, but
rather with an endless sea of information and data. This development will continue because of new
developments in the IT sector: nanocomputers, the Internet of Things and artificial intelligence. Many of
these developments are discussed under the heading of the Big Data Revolution.

As a result, the assumption of bounded rationality is now debatable. If this central assumption is no
longer correct because of the Big Data Revolution, this must have consequences for theories
that have long been dominant in political science and public administration. The central
question of our panel is: what are the consequences of the Data and Sensor Revolution for decision-
and policymaking, both theoretically and empirically?
This general question leads to several sub-questions:
- What are the consequences of the Big Data Revolution for theories on decision- and
policymaking?
- Is it possible to incorporate the consequences of the Big Data Revolution into decision- and
policymaking models?
- What are the consequences of the Big Data Revolution in the daily practice of political and
administrative systems?

Call for papers

The Big Data Revolution challenges some of the assumptions made in the field of decision- and
policymaking connected to bounded rationality. Decision makers are no longer confronted with a lack
of information, but rather with an endless sea of information and data. Under these circumstances,
new questions arise that have consequences for different theories dominating political science and
public administration. This panel wishes to examine these challenges for decision-making processes
and political and administrative systems. Our focus is on novel theoretical and empirical perspectives
moving the field towards identifying and incorporating the consequences of the Big Data Revolution in
these processes.

The panel calls for papers that address the dynamics of decision- and policymaking in the context of
the big data revolution. Submissions can cover a wide range of topics connected to decision-making
models, data-aided decision support, evidence-based policymaking or the digitization of
administrations. Addressing the leading question of what the consequences of big data will be for
decision- and policymaking, papers can offer methodological developments based on big data,
case studies of data-driven decision-making, or analyses of the challenges of incorporating this type of
information into daily public practice. We welcome both theoretical and empirical papers; of course,
combinations of theory and empirical research are also possible.

ROOM
Block B 4 - 4
Fri 30th
13:45
Session 1

Discussant : Sarah Giest (Leiden University)

Data-Driven Innovation as a Strategy : Towards Responsible Innovation and Adaptation for Humanitarian Response and Sustainable Development

Thomas Baar (Centre for Innovation (Leiden University))

Jos Berens (Leiden University, Centre for Innovation)

The unprecedented availability of large-scale data is profoundly impacting the development and humanitarian sectors. Governments, companies, researchers, civil society and other organisations are actively experimenting, innovating and adapting to the opportunities associated with new types of data and methodologies. Particular attention in these fields is paid to how data-driven innovation could fill critical information gaps and thereby support organisations in their strategy, operations and policy development. This drive for innovation has been supported in various official statements (i.a. UN Data Revolution, 2014), shared commitments (i.a. Grand Bargain, 2016) and through the establishment of new partnerships (i.a. the Global Partnership for Sustainable Development Data and the Global Alliance for Humanitarian Innovation). Nonetheless, development organisations and humanitarian actors themselves tend to have limited aptitude for conducting data-driven innovation, as they often lack the necessary resources, capacities and knowledge (Baar et al., forthcoming). This leads to the current state of affairs, in which data-driven innovation within these sectors is still usually facilitated and driven by external actors.

 


At the moment, we see two critical obstacles to the success of data-driven innovation for humanitarian response and sustainable development. While there is plenty of experimentation happening, little is known about the potential risks of many of these new tools and methods (Lepri et al., 2016). Especially for longer-term risks, more research is needed to prevent the sector from deploying tools that might eventually harm the populations it aims to serve. Risks include the exposure of sensitive (personal) information; false information generated through lack of training or mal-intent; and secondary risks such as loss of the target population’s trust, legal liability for aid agencies, and other negative consequences. This opens up a ‘data responsibility gap’ in data-driven innovation. Simultaneously, data-driven innovations see limited uptake by humanitarian and development actors. This ‘adaptation gap’ originates from: (1) a lack of awareness of the potential of new types of data-driven innovation and how to realise it; (2) a disconnect between the design of tools and methods and the needs and desires of their end-users; and (3) the investment required to adequately implement data-driven innovation in existing processes.

 


On the basis of two case studies, we will define three core principles for designing data-driven tools and methods in the humanitarian and sustainable development sectors. Firstly, we will reflect on the realisation of a data platform that tests core assumptions underlying the work of Aids Fonds and its partners by combining and analysing various data sources to answer strategic questions. The second study focuses on the development of a forecasting model based on standardised consumption data to support demand and order planning as part of the operations of Médecins Sans Frontières/Doctors Without Borders (MSF). We will show that leveraging the potential that new types of data and methods have to offer requires: (1) strong collaboration, during the innovation process, with end users and others with a vested interest in deployment and use; (2) an assessment of potential data responsibility challenges to inform the design of data-driven innovation; and (3) ensuring that data-driven innovation is not perceived as a product, but rather as a strategy that includes training in the use of the tool or method to ensure adaptation into current operations and responsible use.
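The abstract does not describe the forecasting technique behind MSF's demand and order planning. As a purely illustrative sketch, making no assumption about the authors' actual model, a minimal consumption-based forecast could look like the following (all names and figures are hypothetical):

```python
# Illustrative sketch only: a minimal demand forecast from standardised
# monthly consumption data, of the kind that could feed order planning.
# This is NOT the MSF model described in the abstract; it only shows the
# general idea of turning consumption history into a forward-looking figure.

def exponential_smoothing_forecast(consumption, alpha=0.3, horizon=3):
    """Forecast the next `horizon` periods from past consumption values."""
    if not consumption:
        raise ValueError("need at least one observation")
    level = consumption[0]
    for value in consumption[1:]:
        level = alpha * value + (1 - alpha) * level
    # Simple exponential smoothing projects the last smoothed level forward.
    return [level] * horizon

# Hypothetical monthly consumption of one medical item at one project site.
history = [120, 135, 128, 150, 160, 155]
print(exponential_smoothing_forecast(history))  # input for order planning
```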

To What Extent the Grand Lyon Metropole can harness the Smart Meter Project towards the Governance of Territorial Climate Energy Plan (PCET) Study case: Smart Electric Lyon project initiated by EDF [French Electric Utility Company]

Wahyuddin Yasser (EVS-RIVES, ENTPE)

In 2012, EDF officially launched a smart meter experimentation project in the Lyon Metropole area. The project established a consortium named Smart Electric Lyon (SEL), which brought together around twenty industrial partners in the energy, home automation, and information and communication sectors, and is financially supported by ADEME (the French environment and energy agency).

 

Technically, the main purpose of SEL is to test solutions built around a new smart meter sensor named “Linky”, installed in 25,000 homes and 100 businesses. SEL offers information services, technical solutions and new tariffs associated with Linky to help consumers better manage their daily electricity consumption.

 

These new features involve the emergence of a new type of fine-grained data on customers’ energy consumption at the level of households or industrial units, captured automatically by the Linky sensor in real time. The data are communicated bi-directionally so as to be available to public and private urban managers as well as to the customers themselves. In doing so, the Linky sensor establishes new data sources beyond traditional methods such as censuses, questionnaires and registries [Desrosières, 1993], framing what many authors have called the “data revolution” [Kitchin 2014, Townsend 2013, Cukier and Mayer-Schoenberger 2013], which allows both the traceability and the interoperability of very fine-grained data on people’s behavioural patterns [Boullier 2015, Lupton 2014].

 

Thus, this paper principally aims to determine the consequences of the Big Data Revolution for the daily practice of political and administrative systems. First and foremost, we apply this leading question to urban governance: can a promoter of the big data revolution such as SEL be employed as an instrument by urban managers (notably in the PCET programme of Grand Lyon), acting as a “detector and effector” of urban governance in the digital era, as described by Margetts and Hood [Margetts & Hood, 2013]?

 

Profound and intensive observation, conducted empirically and at close range within the Grand Lyon authority, the SEL consortium, its instigators and its first implementers, constitutes the primary source for this paper. A cross-actor investigation allows us to grasp the dynamics of SEL’s involvement in both industrial and public governance. It is complemented by documentary analysis and by ethnographic sequences of observation, notably carried out in the SEL showroom.

 

The results of our research show that Grand Lyon possesses an auspicious ecosystem for private actors such as EDF to promote smart meter projects. SEL is seen as a successful achievement, bearing in mind that it has become a national standard reference for smart meters. It is a powerful instrument for EDF with respect to the liberalization of the electricity market and a means to readjust the company’s market strategy. For Grand Lyon, the impact of the big data revolution on its political practices is still vague, as incorporating these systems into the PCET policy remains the subject of a power struggle between Grand Lyon and multiple layers of stakeholders: public, private and intra-national agencies.

 

Keywords: Lyon ecosystem, SEL, Linky, PCET, Data Revolution, Urban governance

 

Institutions and temporal dynamics of policy change: empirical evidence from the Structural Topic Model (STM) analysis of development policies in Asia.

Maria Stella Righettini (University of Padova)

Stefano Sbalchiero (University of Padova)

In the vast literature on how to identify temporal patterns of policy development, the identification of turning points in policy processes over time and their explanation are at the core of most analytic models and empirical investigations (Howlett, Reynard, 2006). A number of studies have emphasized the manner in which actors and institutions can promote change as well as stability. Following the punctuated equilibrium model, we consider changes in policy trajectories as “outgrowths of earlier trajectories” (Baumgartner and Jones, 2002) driven by actor preferences (Buthe, 2002). Both stability and change emerge from stable or changing preferences. The real challenge for empirical investigation is to identify where, at what level, how and with which documentation to ascertain such stability or change, especially when actors perform at different levels (supranational and regional). Our research focuses on interactions between institutional actors with different resources in the policy process, in order to detect dynamics at different levels of governance and in different elements of development policies in Asia.

The Asian Development Bank (ADB) was conceived in the early 1960s as a financial institution that would be Asian in character and foster economic growth and cooperation in one of the poorest regions of the world. ADB assists its members and partners, mainly Asian countries, by providing loans, technical assistance, grants and equity investments to promote social and economic development. Drawing upon a database of 1983 titles and descriptions of projects financed from the 1990s until 2016, we identify the policy fields covered by the ADB over time in a vast geographic area.

We use Latent Dirichlet Allocation (LDA) and the Structural Topic Model (STM) to identify the key topics/policies of the ADB financing agenda over time, identifying topics with significantly increasing and decreasing trends of attention. We use “hot and cold policy topics” (Griffiths and Steyvers, 2004), a procedure which describes how another metavariable (time) can be used to explore the corpus. We test whether STM facilitates sequencing analysis of policy processes over time and space and whether it is possible to match different levels of change: macro (that of systemic and strategic development goals); meso (that of policy sector objectives, the topics, in which changes occur over time and space); and micro (that of the operational setting, policy tools and instruments adopted in the policy programs examined) (Howlett, 2009). Within the frame of topic analysis of large databases, this study explores the opportunity of assessing the evolution of development policy in terms of both geographical and temporal mapping. Our findings assess the usefulness of LDA and STM in analysing the co-occurrence of change at different times and levels of policy processes.
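As an illustrative sketch only (not the authors' pipeline, which also relies on the Structural Topic Model, typically implemented in the R stm package), the "hot and cold topics" idea can be approximated with plain LDA plus a per-topic time trend; all documents, years and parameters below are hypothetical:

```python
# Illustrative sketch: LDA on project descriptions plus a per-year trend on
# topic prevalence, in the spirit of "hot and cold topics"
# (Griffiths & Steyvers, 2004). Not the authors' actual STM pipeline.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical corpus: project descriptions with their approval years.
docs = ["rural road rehabilitation loan", "primary education grant",
        "urban water supply project", "renewable energy investment"]
years = np.array([1995, 2001, 2008, 2015])

X = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # per-document topic proportions

# A "hot" topic gains prevalence over time (positive slope over years),
# a "cold" topic loses it (negative slope).
for k in range(doc_topics.shape[1]):
    slope = np.polyfit(years, doc_topics[:, k], 1)[0]
    print(f"topic {k}: slope={slope:+.4f} ({'hot' if slope > 0 else 'cold'})")
```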

Harnessing the Deluge and Drought of Text Data for Policy Analysis: An Ontological Approach

Chetan Singai (National Law School of India University)

Thant Syn

T. R Kumara Swamy (National Institute of Advanced Studies)

Ajay Chandra (National Institute of Advanced Studies)

The advent of the internet and the electronification of documents have created both a deluge and a drought of text data for policy analysis. Easy access to almost any data that are available has created the deluge; easy means of collecting a variety of data and making them available has added to it. The deluge has also made it difficult to separate the signal from the noise, best exemplified by the ratio of relevant to irrelevant items in a Google search result. The overwhelming quantity of irrelevant data often creates a drought of relevant data: there is woeful data poverty amidst a mirage of plenty. The tension is exacerbated and perpetuated by the ‘herd’ effect of ‘more of the same’ in policy research, formulation, and practice. It is positively reinforcing and convenient for researchers, analysts, and practitioners. It may also be dysfunctional. The situation can be corrected only by studying the ‘big picture’ of a domain to determine: (a) its ‘bright’, ‘light’, and ‘blind/blank’ spots, and (b) the antecedents and consequences of the differences in emphases (Ramaprasad & Syn, 2015). A ‘bright’ spot may be heavily emphasized because it is important or easy; a ‘light’ spot may be less emphasized because it is unimportant or difficult; a ‘blind’ spot may have been overlooked; and a ‘blank’ spot may be unfeasible.

We present a method for harnessing the deluge and drought of text data for policy analysis using an ontology, with the example of envisioning an emerging urban public university created by the trifurcation of an existing university. The method is similar to that used to map the state of the higher education system in a state in India and the universities in Chile (Coronado, La Paz, Ramaprasad, & Syn, 2015; Hasan, Ramaprasad, & Singai, 2014).

There is little systematically organized data about the emerging university and its parent. We collected and collated a large volume of data about the constituent units of the emerging university using (a) some of the authors’ knowledge of the domain and of sources of data, (b) online search capabilities, and (c) the personal knowledge of a few experienced people. The data, while not complete, are perhaps the most comprehensive available about the university. They were mapped onto an ontological framework, similar to the one used in earlier studies, to discover the ‘bright’, ‘light’, and ‘blind/blank’ spots in the emerging university. Ontological maps at different levels of granularity were used to recommend a strategic plan for the university.
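As an illustrative sketch only, and assuming a drastically simplified two-dimensional ontology rather than the richer framework used in the paper, the ‘bright’/‘light’/‘blind/blank’ classification amounts to counting how often each ontology cell is instantiated in the collected text data; all categories, codings and thresholds below are hypothetical:

```python
# Illustrative sketch: classify cells of a toy two-dimensional ontology as
# 'bright', 'light' or 'blind/blank' spots by how often they are mentioned.
# The paper's actual ontology and mapping procedure are far richer.
from collections import Counter
from itertools import product

stakeholders = ["students", "faculty", "administration"]
functions = ["teaching", "research", "outreach"]

# Hypothetical coded fragments: each text fragment has already been mapped
# to one (stakeholder, function) cell by a reader or a keyword matcher.
coded_fragments = ([("students", "teaching")] * 12 +
                   [("faculty", "research")] * 7 +
                   [("administration", "outreach")] * 1)

counts = Counter(coded_fragments)
for cell in product(stakeholders, functions):
    n = counts.get(cell, 0)
    spot = "bright" if n >= 10 else ("light" if n >= 1 else "blind/blank")
    print(f"{cell}: {n} mentions -> {spot}")
```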

 

Coronado, F., La Paz, A., Ramaprasad, A., & Syn, T. (2015). Navigating the Complexity and Uncertainty of Higher Education Systems: Ontology Mapping of Chile’s Universities. Proceedings of HERDSA 2015, Melbourne, Australia.

Hasan, T., Ramaprasad, A., & Singai, C. (2014). Rethinking Higher Education Research: Ontology Mapping of Higher Education Systems. Proceedings of HERDSA 2014, Hong Kong.

Ramaprasad, A., & Syn, T. (2015). Ontological Meta-Analysis and Synthesis. Communications of the Association for Information Systems, 37, 138-153.

 

From Dots to Distributions: Why a Statistician’s Approach to Big Data Matters

Jason Kok (Autoriti Monetari Brunei Darussalam)

This paper examines a practical challenge in using data for policy: we may have the evidence for evidence-based policy, yet still be unable to make the right decisions. It is argued that this is because, despite the emergence of Big Data, the importance of analysts, researchers and policymakers understanding statistical concepts is still underappreciated. The process of analysing evidence to produce research that informs policymakers implicitly relies on an understanding of statistical concepts and of the limits of the conclusions presented in research. There is an overemphasis on “dots” such as the mean or median, which give a point value for forecasts, expected effects and so on, without an appreciation of the “distribution” of uncertainty and possible outcomes around that “dot”. The paper discusses basic statistical concepts (sampling, random variables and the moments of a distribution) about which misunderstandings still doggedly persist and distort the conclusions drawn from research. Only basic statistical concepts and theory, common to almost every undergraduate or high school statistics textbook, are used, highlighting a serious practical issue for analysts and researchers trying to convey their research to policymakers in an understandable manner. A number of real-life examples illustrate that this is a real problem even in developed economies. The paper concludes by discussing the author’s views on why this problem persists and how it may be addressed.
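As an illustrative sketch of the “dots versus distributions” point, using entirely hypothetical numbers rather than any example from the paper, two forecasts can share the same mean yet imply very different policy risks:

```python
# Illustrative sketch: the same "dot" (a mean forecast of +2.0) can come from
# very different "distributions", which is what matters for policy decisions.
import numpy as np

rng = np.random.default_rng(0)
narrow = rng.normal(loc=2.0, scale=0.3, size=10_000)  # hypothetical forecast A
wide = rng.normal(loc=2.0, scale=2.0, size=10_000)    # hypothetical forecast B

for name, draws in [("narrow", narrow), ("wide", wide)]:
    lo, hi = np.percentile(draws, [5, 95])
    prob_negative = (draws < 0).mean()
    print(f"{name}: mean={draws.mean():.2f}, "
          f"90% interval=({lo:.2f}, {hi:.2f}), P(effect < 0)={prob_negative:.1%}")
```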