Open Sessions

A list of the open sessions to submit to appears below; click on a topic title to view the abstract before you start the submission process.

Approaches to observing ethnicity and respondent origin in cross-national comparative surveys: Core measurement properties, validity analyses, coverage, and comparisons of alternative measures

Comparative surveys such as the International Social Survey Programme or the European Social Survey are increasingly implementing questions that assess the origin (‘migration background’) or ethnicity of their respondents. This includes ‘objective’ measures such as country of birth of the respondents or of their direct ancestors, and ‘subjective’ measures that refer to “ethnicity”, “heritage” or minority group membership. However, these concepts are not easily defined, and even less easily operationalized in a way comparable across different national contexts.

A variety of approaches for observing these concepts has by now been employed in comparative surveys. The aim of the session is to present an evaluation of the methodological performance and substantive utility of the existing instruments, mainly by conducting comparisons of data gathered with different instruments, or of the same instruments cross-nationally, across multiple rounds, or across different surveys. This exercise should facilitate further progress in the implementation of such measures in future survey rounds.

Specifically, the session invites contributions addressing the following issues:

1) Systematic reviews of the ways in which different national and cross-national surveys (whether addressed to the general population or to ethnic and/or migrant minorities) approach the measurement of ethnicity/origin-related concepts. Which dimensions related to ethnicity are measured, and how? Do they include questions on ethnic identity? Are they self-assessed measures?

2) Direct comparisons of results obtained from alternative survey measures for the same country/time combinations. This would again invite comparisons between general population and special-population surveys.

3) Methodological evaluations of existing implementations in cross-national contexts (reliability, respondents' reactions, item non-response, etc.).

4) Coverage – how well do the existing measures work to reproduce benchmark distributions for ethnic/origin groups? What is the correspondence with official statistics; if it is low, where do the differences originate?

5) Evaluating the (relative) validity of relevant measures in substantive applications.
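One way to make point 4 concrete: the gap between a survey's estimated group shares and an official benchmark distribution can be summarised with the index of dissimilarity, i.e. the share of cases that would have to change groups for the two distributions to coincide. A minimal sketch in Python; the group labels and shares below are hypothetical:

```python
def dissimilarity_index(survey_props, benchmark_props):
    """Index of dissimilarity between two distributions over groups:
    the share of cases that would have to change groups for the
    survey distribution to match the benchmark."""
    groups = set(survey_props) | set(benchmark_props)
    return 0.5 * sum(
        abs(survey_props.get(g, 0.0) - benchmark_props.get(g, 0.0))
        for g in groups
    )

# Hypothetical origin-group shares: survey estimate vs. register benchmark
survey = {"majority": 0.82, "group_a": 0.10, "group_b": 0.08}
register = {"majority": 0.78, "group_a": 0.14, "group_b": 0.08}
gap = dissimilarity_index(survey, register)
```

A value of 0.04 here means four percent of respondents would need to be reclassified for the survey to reproduce the register benchmark.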

Harmonisation of Background Variables for Cross-national Surveys

For cross-national surveys, the comparability of information collected in different cultural contexts is crucial. This applies not only to substantive questions but also to indicators of respondents’ background. Background variables (BV) allow researchers to identify the socio-economic context to which respondents’ attitudes or behaviour are linked.

To reach comparable BV in cross-national surveys, different approaches are available:
Input harmonisation is based on a central survey design and the strategy of asking the same questions in order to achieve comparability across countries. Following this approach, cross-national surveys start from a source questionnaire and translate it into linguistically equivalent country-specific questionnaires.
Output harmonisation, on the other hand, focuses on measuring the same variables using different indicators. Although these are asked country-specifically, they aim for conceptual equivalence.
A mixed approach combines elements of both, as in the ISSP, which defines the BV, their concepts and categories in advance while setting no rules for the particular national questions.

Different cross-national surveys use different harmonisation approaches and different indicators to measure similar BV. Each approach has its pros and cons. Input harmonisation is sometimes criticised as a “weakest chain” approach that struggles with shifts of meaning in translation (e.g., of “household”). Output-harmonised approaches suffer from diverging national input variables and the challenge of covering cross-national concepts appropriately.

Each cross-national survey holds a wealth of expertise that is rarely widely known. The proposed session aims to help fill this gap. We welcome papers on BV and harmonisation in cross-national surveys referring to

– General issues of BV harmonisation;
– Measurement issues of respondents’ marital and partnership status, work situation, religious affiliation, political orientation, income, geographical references;
– International standard codes (e.g., for education or occupation): use, problems, and solutions;
– Quality issues;
– Challenges posed by new survey developments.

Methods of Social Network Analysis
Social network analysis is the application to social research of the concept of the network — a set of entities, or nodes, connected by relationships, or ties. Conceptualisation of social structures as social networks has been fruitful in many areas of the social sciences, and has, indeed, facilitated recognition of substantive patterns and analytic problems common to the social and other sciences. One of the most lively areas of social network analysis has been the development of suitable methods for applying the network concept in social research. These methods address three main issues: sampling, measurement, and data analysis. In each of these areas, the problems faced by network researchers are considerably, though not entirely, different from those encountered by conventional attribute-based research. This session will provide a forum for presentation of new developments in research methods for social network analysis. These papers may be theoretical, concerning epistemological problems in the use of the concept of the social network; methodological, concerning technical developments in sampling, measurement, or data analysis; or empirical, demonstrating novel applications of social network analytic methods in actual research.
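The basic objects described above — nodes, ties, and measures defined on them — can be sketched in a few lines. The following is a minimal illustration, with invented actors, of one elementary measurement concept, normalised degree centrality:

```python
from collections import defaultdict

# A tiny undirected network: nodes are actors, ties are relationships.
# All names are invented.
ties = [("ana", "ben"), ("ana", "cem"), ("ben", "cem"), ("cem", "dee")]

adjacency = defaultdict(set)
for a, b in ties:
    adjacency[a].add(b)
    adjacency[b].add(a)

def degree_centrality(adj):
    """Normalised degree centrality: each node's number of ties
    divided by the maximum possible number of ties (n - 1)."""
    n = len(adj)
    return {node: len(neighbours) / (n - 1) for node, neighbours in adj.items()}

centrality = degree_centrality(adjacency)
```

Here "cem" is tied to all three other actors and so has centrality 1.0, illustrating how a structural position becomes a measurable attribute.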
Assessing the Quality of Survey Data
This session will provide a series of original investigations of data quality in both national and international contexts. The starting premise is that all survey data contain a mixture of substantive and methodologically-induced variation. Most current work focuses primarily on random measurement error, which is usually treated as normally distributed. However, there are many kinds of systematic measurement error — or, more precisely, many different sources of methodologically-induced variation — and all of them may strongly influence the “substantive” solutions. Sources of methodologically-induced variation include response sets and response styles, misunderstandings of questions, translation and coding errors, uneven standards between the research institutes involved in data collection (especially in cross-national research), item and unit non-response, as well as faked interviews. We consider data to be of high quality when the methodologically-induced variation is low, i.e. when differences in responses can be interpreted on the basis of theoretical assumptions in the given area of research. The aim of the session is to discuss different sources of methodologically-induced variation in survey research, how to detect them, and the effects they have on substantive findings.
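Some of the systematic error sources listed above can be screened for quite simply. As an illustration (with invented response data), straightlining — one common response set — can be flagged by checking whether a respondent gives the identical answer to every item of a battery:

```python
def straightlining_rate(responses):
    """Share of respondents who give the identical answer to every
    item in a battery -- one simple screen for response sets."""
    flagged = sum(1 for answers in responses if len(set(answers)) == 1)
    return flagged / len(responses)

# Invented answers of four respondents to a five-item rating battery
battery = [
    [3, 3, 3, 3, 3],  # straightliner
    [2, 4, 3, 5, 1],
    [1, 1, 2, 1, 1],
    [5, 5, 5, 5, 5],  # straightliner
]
rate = straightlining_rate(battery)
```

Such a screen is deliberately crude — identical answers can be substantively genuine — but it illustrates how methodologically-induced variation can be made detectable at all.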
Cognitive processes during answering questions – challenges and opportunities in empirical research
During the last decades, various methods and at least some theoretical approaches have been developed to grasp the cognitive processes underlying the answering of survey questions. Probably the most prominent example is the model by Tourangeau (first proposed as early as 1984). This model distinguishes four stages of question answering: (1) comprehension of the question, (2) retrieval of information, (3) deriving a judgement, and (4) formulating a response. In addition, there are also models based on personality, such as the satisficing-optimizing model proposed by Krosnick (1991). In this model, two groups of respondents are distinguished: those who minimize their effort to give an answer versus those who try their best to give a good answer.
Cognitive theory is also used in empirical studies by sociologists, psychologists, linguists and other scholars to obtain a deeper understanding of language processing, the formation of attitudes, or memory. These studies have led to considerable improvements in our surveys, as they help explain different threats to the validity and quality of survey measurement. We can also use different pretesting methods to find optimal wording for questions and answer options, or ways to reduce bias caused by social desirability. Quantitative approaches (such as online probing) as well as the qualitative method of cognitive interviewing have been especially fruitful in shedding light on difficulties in understanding certain terms or in detecting inconsistencies in the selection of answer categories. Although classical methods offer a useful understanding of cognitive processes during an interview, we have to ask whether these models remain valid in the light of recent findings in neuroscience. Here, advanced imaging methods may allow deeper insights into real-time cognitive processing.
In general, this session aims to address researchers from various disciplines who focus on cognitive approaches, methods or studies to shed light on the process of answering questions. We are open to all traditional methods (e.g. cognitive interviewing or online probing) to assess the effects of various survey design characteristics but we particularly welcome studies focusing on new methods (e.g. neuroimaging or eyetracking) to unravel the underlying cognitive mechanisms causing these effects.
What are Sensitive Topics? – Perspectives from quantitative and qualitative research

Accessing sensitive topics is an important issue for survey research as well as for research with qualitative methods such as interviews and focus groups. In survey research, various methods have been developed to encourage participants to take part in such surveys (avoiding unit non-response), to answer sensitive questions (avoiding item non-response), and to give honest answers (avoiding bias caused by socially desirable answering). These methods include sampling strategies, framing and formulating items so that potentially problematic answers seem “normal”, enhancing perceived privacy, and randomized response techniques that at least enable estimates for aggregates. However, as all these methods come with potential drawbacks, it seems reasonable to ascertain as precisely as possible in which cases such methods are really needed, i.e. which topics and items have to be considered sensitive.
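To illustrate the last of these tools: under Warner's classic randomized response design, each respondent answers either the sensitive question (with known probability p) or its complement, so individual answers reveal nothing while the aggregate prevalence remains estimable. A minimal sketch of the estimator; the observed "yes" share below is hypothetical:

```python
def warner_estimate(yes_share, p):
    """Estimate the prevalence pi of a sensitive trait under Warner's
    randomized response design: each respondent answers the sensitive
    question with probability p and its complement otherwise, so the
    expected 'yes' share is p*pi + (1 - p)*(1 - pi)."""
    if abs(2 * p - 1) < 1e-12:
        raise ValueError("p must differ from 0.5")
    return (yes_share - (1 - p)) / (2 * p - 1)

# Hypothetical: 46% 'yes' answers with a p = 0.7 randomizing device
pi_hat = warner_estimate(0.46, p=0.7)
```

With these numbers the estimated prevalence is 0.40 — higher than the raw "yes" share, which is diluted by complement answers; the privacy gained comes at the price of a larger sampling variance.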

Qualitative methods, on the other hand, have been described as both more and less suitable for dealing with such topics. On the one hand, shame and privacy concerns can be more salient in qualitative interview situations. On the other hand, the trust relationship established between researcher and participant in qualitative settings is described as especially helpful for addressing private and sensitive topics. We are well aware that researchers don’t choose a quantitative or qualitative paradigm based on such considerations. However, research on sensitive topics and how to deal with them in one setting may well inspire considerations of what could be sensitive topics, and how to handle them, in another setting.

For this session, we particularly encourage submissions on what topics are sensitive, in which populations, and for which research settings. We welcome studies using classical methods to detect sensitive questions, but we particularly aim for new concepts and new methods for identifying the sensitivity of questions among different social or cultural groups and for different research settings.

Between materialism and constructivism? Researching emotions from a multi-paradigmatic perspective. A joint session with Research Network 11 “Sociology of Emotions” of the European Sociological Association

Emotions and affects are increasingly the focus of analysis across a number of disciplines and professional settings; however, their tacit and explicit role both in the selection of methodology and in the interpretation of data can be overshadowed by attempts to rationalise particular choices. The question of how to empirically study emotionality in social contexts is particularly problematic: with their language of objectivity, methodology textbooks may fail to do justice to the requirements of emotion research, suggesting that new languages and techniques may be needed.

Within the field of sociology there is a wide range of approaches to theorizing human emotions (Turner, 2009). One approach may focus on measuring emotions (Gomez Garrido, 2011), another on cultural modes of emotional expression and the normative sanctioning and regulation of emotional responses (van’t Wout, Chang, & Sanfey, 2010), while a third perspective focuses on how emotions affect views and behaviours in public and political arenas (Lamprianou & Ellinas, 2019). For this session, the focus of interest is the special position of emotions and affects between materiality and construction (Albrecht, 2017; Katz, 1999; for the discussion of new materialism see e.g. Ahmed, 2008; Haraway, 1995; van der Tuin, 2011), and what this may imply for empirical research. For this purpose, we are interested in qualitative, quantitative and mixed-methods research, and in methodological as well as methodical questions. Questions arise such as: How can we as researchers cope adequately with this “in-betweenness” of emotions and affects? What are the best methodological tools for achieving this purpose? How do we study and identify emotional and affective states? How can we calculate the impact of emotions as drivers of political and social change?

We propose a panel devoted to methods and emotion to enable some of these fundamental questions to be fruitfully considered in the light of the discussions that arose from the earlier RN11 sessions.

Teaching research in the post-qualitative

Much has been written about the difficulty of teaching research methods given some students’ negative attitudes towards research, and about how to address this. This literature tends to focus on teaching quantitative methods and statistics; relatively less is said about the pedagogy of qualitative research, even though the field is growing in popularity. Moreover, some researchers (see e.g., Lather 2013) argue that we have moved into the post-qualitative (also known as QUAL 4.0), where we acknowledge the limits of humanistic qualitative inquiry, that it is essentially “made up”, and where new practices are becoming. Within this context, Wagner, Kawulich, and Garner (2019) ask “What comes next for teaching qualitative research?” (p. 16).

This session calls for papers that consider topics on how to teach qualitative research, especially in the context of the debate on the post-qualitative, which could include the following:
(1) How do we prepare students to become qualitative researchers in the context of post-qualitative research?
(2) Sharing effective/new teaching practices for qualitative research.
(3) What are students’ experiences of learning qualitative research?
(4) How do new modes of instruction, for example, online courses and the use of technology, shape qualitative research teaching practices?
(5) How do teachers assess qualitative research methods learning?

Innovations and challenges in qualitative research: A real-life approach

The session focuses on qualitative research methodology and fieldwork realities. Qualitative research is a riveting endeavour of delving into the complexity of social life via the experiences and perceptions of social actors, the meanings they attribute to their actions and interactions, and detailed accounts of their (everyday) lives. At the same time, however, qualitative research is a complex research activity that requires the researcher to be flexible yet make sound methodological decisions in a potentially uncontrolled field environment. Alongside more “common” challenges such as participant recruitment and engagement, sensitive or emotionally charged situations, or the involvement of vulnerable groups, qualitative research is undergoing developments related to emerging contexts and topics (e.g. research on virtual communities and spaces) as well as conversations about post-qualitative research in the literature. These developments call for innovative approaches and for discussion of their application within qualitative methodology.

Therefore, the session invites researchers to reflect upon and discuss their real-life experiences and insights, challenges and possible solutions or alternatives at any stage of conducting qualitative research. Possible sub-topics may cover (but are not limited to):

(1) The role of the researcher in contemporary qualitative research and changing social reality.
(2) Adapting qualitative research for local contexts or specific communities.
(3) Qualitative research involving “vulnerable groups” and/or “sensitive” topics. How do we decide about “vulnerability” or “sensitivity”? How do we manage sensitive research contexts? How do researchers deal with the pressures and sometimes stressful emotions after difficult conversations or situations?
(4) Qualitative research that involves e-technologies – both as a field of social reality and as a research tool. What ethical, data management and other issues emerge? How can we deal with them?

Same Same but Different? Response Discrepancies in Multi-Actor Surveys
While there is a broad consensus that proxy reports are of low quality, especially for nonfactual questions, surveys are increasingly using a multi-actor design in which data are collected from primary respondents and related persons such as parents, partners or children, but also from friends, coworkers or pupils; examples include pairfam (German Family Panel), NEPS (National Educational Panel Study), NSFH (National Survey of Families and Households) and NKPS (Netherlands Kinship Panel Study). However, it has been established that discrepancies between respondents’ answers to the same questions are likely to occur. For example, partners disagree about the frequency of conflicts experienced in their relationship, the perceived partnership quality, the share of housework each partner contributes, or even whether they use contraception or not. Children and parents give divergent answers about support and contact, about children’s school grades, and about whether children have had to repeat classes. Until now, only a few studies have assessed the magnitude of these discrepancies between respondents’ answers and identified their determinants. In this session, we would like to stimulate the discussion around these issues arising from multi-actor surveys. How prevalent are the differences for dyads and triads, and for which topics? Under what circumstances are respondents’ reports more or less similar, and how do we deal with these discrepancies theoretically and methodologically? Against this background, we would also like to reflect upon theoretical approaches that systematically explain diverging perceptions among respondents.
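One standard way to quantify the magnitude of such discrepancies for categorical questions is chance-corrected agreement, e.g. Cohen's kappa computed over dyads. A minimal sketch with invented partner reports:

```python
from collections import Counter

def cohens_kappa(reports_a, reports_b):
    """Chance-corrected agreement (Cohen's kappa) between two
    partners' categorical reports on the same question."""
    n = len(reports_a)
    observed = sum(a == b for a, b in zip(reports_a, reports_b)) / n
    freq_a, freq_b = Counter(reports_a), Counter(reports_b)
    # Agreement expected if the two report distributions were independent
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Invented dyadic reports, e.g. on contraception use
partner_1 = ["yes", "yes", "no", "no", "yes", "no"]
partner_2 = ["yes", "no", "no", "no", "yes", "yes"]
kappa = cohens_kappa(partner_1, partner_2)
```

Raw percent agreement (here 4/6) overstates concordance because partners would sometimes match by chance alone; kappa corrects for that, which is why it is a common summary in dyadic analyses.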
Non-binary sex and gender in general population surveys: Current developments

In recent years, survey organisers and methodologists have become more sensitive to questions referring to respondents’ sex/gender. Until recently, the question on sex/gender was asked in a simple fashion (e.g., “What is your gender/sex?” or even “Are you…?”) with two answer categories (“male” and “female”). Following changes in national laws, for instance in Australia, Germany and the U.S., a third category for intersex people has been introduced in official registers. Therefore, survey organisers now widely discuss the implications of a new answer category for the design of this question. It is, for instance, not clear how a third category should be handled after data collection, or how to deal with this question in interviewer-administered surveys. As this new answer category is defined slightly differently across countries, it poses additional challenges for international surveys.

Another issue that arises when discussing a third answer category concerns the concept behind the question. Most general population surveys do not distinguish between biological sex and gender identity. In contrast, medical studies have differentiated between these two concepts for years.

We invite contributions from researchers who report how they dealt with the different issues related to the question of respondents’ sex/gender:

– The differentiation of the concepts sex and gender in surveys
– Understanding of and reactions to a non-binary response option
– Question alternatives including non-binary respondents
– Questions relating to sampling and data anonymity of intersex respondents
– The effect of question wording and/or order of answer options on response behaviour
– International comparability of data on sex/gender in light of varying legislation

Visual Design

Visual design can impact respondents’ question-answer process: it can influence how respondents understand a question, which information they retrieve, how they arrive at a judgment, and which response they provide. Visual design is complex and affects multiple aspects of a questionnaire. Respondents use interpretive heuristics to “make sense” of the spacing, position, and order of a question on the screen (Tourangeau et al. 2004; Toepoel & Dillman, 2011). Visual design influences whether respondents perceive and process information. Visual stimuli in response scales (e.g., smileys, stars) and factorial design can shift the meaning of a question for respondents. The size and number of answer boxes change the answer format respondents expect.

In recent years, survey methodologists have faced new challenges regarding visual design. For example, mixed-device surveys and apps call for adapting previous design principles and creating new ones to ensure an optimal visual design that keeps measurement error to a minimum.

For this session, we invite presentations that investigate the impact of different elements of visual design on response behavior and response quality. We are particularly interested in submissions that address the impact of visual design elements in newer modes of data collection, such as mixed-device surveys or apps.

Purposeful Sampling

It is common knowledge that researchers can never analyze the whole world but have to select some parts of social reality. Thus, researchers have to define what cases they are interested in, reflect on what field or population their cases belong to, and decide according to what criteria they select their cases. Sampling and case selection strongly influence whether and how results can be generalized, as well as how data can be linked. Therefore, developing a sampling strategy is an essential part of social research. In the course of this debate, various sampling procedures have been developed, e.g. random sampling (Behnke et al. 2010), purposeful sampling (Marshall 1996; Creswell and Poth 2018; Miles et al. 2013; Akremi 2019), theoretical sampling (Strauss and Corbin 1990), fuzzy sets (Ragin 2000) and the selection of single cases for case studies (Yin 2014; Baur and Lamnek 2017).

In this session, we want to discuss specific procedures of purposeful sampling. This involves the description of selection criteria appropriate to the research subject as well as practical issues in obtaining access to the selected cases. Which concrete purposeful sampling strategies, such as similar-case or most-different-case sampling, are used? How do researchers deal with situations in which access is denied? What effects does this have on the generalization of results or on the research process as a whole?
We welcome methodological research as well as empirical examples that contribute to the ongoing discussion of typical issues of purposeful sampling, such as how many cases researchers should select and how research findings can be generalized from small or large numbers of cases.

Qualitative longitudinal research: insights into conceptual foundations and reflections on design and practice

Qualitative longitudinal research (QLR) has become increasingly popular over the last decade and has stimulated much methodological discussion in social research. QLR has great potential for detecting change (or stability) over time and the processes associated with it, as well as for exploring the perspective of the person experiencing that change. In sum, facets of change can be explored as they unfold, captured through respondents’ interpretations across time (rather than retrospectively from a single moment in time). Besides its potential, QLR is also challenging as a methodology, e.g. due to its complex data structure or the absence of analytical closure.

This session encourages methodological reflections on qualitative longitudinal research employing interviews, ethnography or a combination of multiple methods. Contributions can cover a wide range of topics; they can be conceptual in nature or reflect on research experiences.
We particularly welcome contributions addressing one or more of the following topics – without being limited to those:

• Conceptual and methodological foundations of QLR
• Change, flexibility and continuity
• Design and sampling
• Panel maintenance (e.g. incentives, staying in touch)
• Interviewer continuity
• Data management
• Data analysis and analytical closure
• Ethical implications (e.g. consent and personal involvement)
• Reflexivity and the researcher’s involvement
• Potential for secondary analysis

Challenges in Establishing the Validity of Measurements in the Age of Digitalization and Globalization

In the digital and global age, in which data from different cultures and data sources intermingle, data usage and the interpretation of results should be revisited and re-examined. For analyzing a social phenomenon of interest, survey data can be augmented or substituted by supplementary or auxiliary data or metadata from digital sources. Data can also come from different cultures and contexts. A relevant focus is the definition of validity, and both the suitability and the availability of methods to assess and evaluate it. Questions about data usage and its suitability for the intended comparisons, interpretations and conclusions should be addressed in order to turn challenges into opportunities.

The session addresses advances both in qualitative and quantitative methods that contribute to answering questions of validity when measuring concepts and constructs with different data sources and/or from different cultural contexts.

We invite researchers to contribute to this discussion by submitting either conceptual or empirical work. The papers could contribute but are not restricted to the following questions:

1. Which developments in validation theory are important for responding to current challenges and situations?
2. What methodological innovations concerning validity of measurements can be observed (such as new ways to pinpoint the degree of validity)?
3. What are the problems in the research practice in dealing with validity of measurements? Are there typical problems due to the characteristics of specific data sources or contexts and what are potential solutions?
4. What is known about the sources of invalidity?
5. How can standards of establishing validity in qualitative and quantitative research be maintained (e.g. when mixed data sources are used)? How can qualitative and quantitative methods be applied in conjunction with each other to warrant the validity of measurements?
6. What are concrete techniques for improving existing data sources (e.g. in surveys or interviews)?

What About Causal Mechanisms? Analytical Approaches Using Longitudinal Data in Life Course Research
Research questions that motivate the great majority of life course studies are longitudinal and causal in nature. After several decades of research that produced numerous cross-sectional findings, the need for studies that disentangle causal effects from non-causal associations, and mechanisms from confounding factors, is becoming increasingly recognized. Following recent developments in causal theory, new opportunities for innovative life course research are being generated by the increasing availability of complex panel data, with individuals observed for several years, belonging to different birth cohorts and socio-economic contexts, affected by period effects, and nested in multiple structures (e.g., couples, households, families, social networks, school classes, schools, countries). Going beyond the estimation of the gross (causal) effect, researchers are increasingly interested in identifying and examining the intervening mechanisms. However, given the complexity of panel data, this can be very challenging. Judea Pearl writes in “The Book of Why: The New Science of Cause and Effect” (2018, pp. 153-154): “Z (…) is not a confounder. It is known as a mediator: it is the variable that explains the causal effect of X on Y. It is a disaster to control for Z if you are trying to find the causal effect of X on Y”. The purpose of our session is to meet challenges with regard to (causal) mediation analysis using panel data (e.g., within the framework of fixed-effects models, growth curves, DAGs, multilevel longitudinal models, SEM, event history models). We invite submissions examining and testing mediating mechanisms or moderated mediation, or applying effect decomposition, in life-course-related longitudinal studies based on either a regression or an SEM framework. These could be studies using an age-period-cohort specification, a multiple-cohort design or a multilevel design.
Substantively, we are particularly interested in research on social inequalities and/or similar topics.
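As a toy illustration of the effect decomposition the session addresses (a sketch only, not a recipe for real panel data, where confounding and time structure complicate matters): in a simulated linear X → M → Y setting, the total effect splits exactly into the direct effect plus the product-of-coefficients indirect effect.

```python
import numpy as np

def ols_slopes(y, *predictors):
    """OLS slope coefficients; an intercept is included in the fit but dropped."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

# Simulated data with a known X -> M -> Y mechanism (all values invented)
rng = np.random.default_rng(42)
n = 2000
x = rng.normal(size=n)                       # exposure
m = 0.5 * x + rng.normal(size=n)             # mediator
y = 0.3 * x + 0.4 * m + rng.normal(size=n)   # outcome

(total,) = ols_slopes(y, x)      # c: total effect of X on Y
(a,) = ols_slopes(m, x)          # a: effect of X on the mediator
direct, b = ols_slopes(y, x, m)  # c' and b from the full model
indirect = a * b                 # product-of-coefficients indirect effect
# For OLS in linear models the decomposition is exact: c = c' + a*b
```

Pearl's warning quoted above is visible here: controlling for the mediator M yields the direct effect c' (around 0.3), not the total effect c (around 0.5), so "controlling for everything" destroys the quantity of interest.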
Natural Language Processing: a New Tool in the Methodological Tool-Box of Sociology

Large volumes of unstructured digital textual data surround us. Sociology can benefit from the analysis of such textual data in many ways: it allows the analysis of directly observable, often real-time social behavior; it can be used for longitudinal tracking; and it allows the examination of small groups. Additionally, researchers can analyze not only born-digital but also digitalized texts.

The processing of digital textual data requires new methodological tools. Natural Language Processing (NLP) employs computational techniques for this purpose. Sociologists have an important role to play when NLP is applied to social topics, as they can provide the domain knowledge necessary to link the analysis to sociological discourses: forming sociologically relevant research questions and hypotheses, defining relevant variables, and interpreting the results in a larger theoretical context. However, it is important to note that these data, like classic survey data, are not free of limitations concerning data quality and issues such as representativity or internal and external validity. Nevertheless, these challenges should not prevent us from analyzing digital texts; rather, they make it necessary to understand how these data come into being and how they relate to classical empirical data.

The goal of the session is to present the widest possible variety of issues and ways in which NLP can be utilized in sociology. Beyond purely quantitative papers, those using a mixed-methods approach are also warmly welcome. We invite papers that apply any method of Natural Language Processing to the analysis of any significant sociological phenomenon. In order to show the potential of this approach for sociology, we invite papers that are linked to an existing sociological discourse. Additionally, we especially encourage presenters to address issues connected with the quality of data and measurement.
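For readers new to NLP, even the first preprocessing step — turning raw text into countable tokens — already involves consequential measurement decisions. A minimal sketch using only Python's standard library (the documents are invented):

```python
import re
from collections import Counter

def tokenize(text):
    """Lower-case a document and split it into word tokens.
    The regex itself is a measurement decision: here hyphenated
    compounds like 'real-time' are split into two tokens."""
    return re.findall(r"[a-z']+", text.lower())

# Two invented 'documents'
docs = [
    "Survey data can be augmented with digital text.",
    "Digital text allows real-time analysis of social behaviour.",
]
term_counts = Counter(tok for doc in docs for tok in tokenize(doc))
```

Such bag-of-words counts are the input to many downstream NLP methods (topic models, classifiers), which is why data-quality questions start at tokenisation, not at modelling.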

‘In scientific method we (don’t just) trust’: metascience for social and behavioural research

In recent years there has been an intensifying focus on the methods, practices, conventions and norms that inform how scientific knowledge is produced in the social and behavioural sciences. Ioannidis’s 2005 article, “Why Most Published Research Findings Are False”, fired the starting gun for a slew of new research on the process of research itself. An early focus was the use and misuse of p-values in generating false positives, leading to findings that failed to replicate. Since then, the emerging field of ‘metascience’ has begun to explore other ‘questionable research practices’ (QRPs) concerned with, for example, publishing and peer review, institutional norms and incentives, selective use of data, reproducibility and replication, and the preregistration of research.

More broadly, metascience is concerned with finding ways to decrease the resources wasted in scientific research whilst also increasing its quality. In doing so, it tries to answer questions such as: how do scientists generate theories? How does statistical practice support or undermine knowledge claims? Does the distinction between exploratory and confirmatory research matter? How do the norms that support research integrity vary across epistemic fields (e.g. psychology vs sociology; qualitative vs quantitative)?

In this session we invite papers that contribute to this burgeoning research area. Novel empirical and theoretical contributions are welcome from all fields of social and behavioural science.

Age, cognition, and survey data quality – Methodological issues in older respondent populations

Standardized surveys are regularly administered across all age groups and populations. Methodological research across different disciplines has repeatedly documented an effect of age-related functioning on response quality in standardized surveys. Age-related differences in cognitive functioning, information processing, short- and long-term memory, text comprehension, communication ability, concentration and resilience, or health states may lead to very specific response effects and to significant limitations in the quality of the obtained data. Changes in such sensory, cognitive and motivational aspects, as well as different forms of institutionalisation, go hand in hand with different effects of research designs and methods and give rise to age-sensitive context effects. Drawing on the total survey error (TSE) framework, survey data from age-related vulnerable groups of respondents may be subject to a variety of sources of error originating from both TSE components: representation (e.g. refusals, nonresponse, attrition, sample selectivity) and measurement (e.g. interviewer effects, context effects, acquiescence, item nonresponse).

In this session, we welcome papers dealing with all aspects of age-related data quality issues covering a broad range of topics like the following:

– Special sampling and recruitment strategies for older and/or institutionalised populations
– Sample selectivity and undercoverage
– Nonresponse error and attrition
– Restrictions regarding survey modes and mode effects
– Screening strategies for ‘interviewability’
– Tailored data collection strategies (e.g. interviewer skills and style) and methods
– Linkage of data (e.g. biomarkers, proxy interviews or non-reactive process-produced data) to the primary survey data
– Effects of age and cognitive function on response quality (i.e. measurement errors)
– Experimental techniques researching age effects on surveys
– Differentiation between methodologically induced age differences and ‘true’ age effects
– Paradata used as quality indicators and assurance

Teaching qualitative, quantitative and mixed methods in social science programs

In most cases, study programs in the social sciences include basic training in qualitative and/or quantitative methodology. In these courses, prospective social science researchers learn the basics for their future work. In addition, these courses provide fundamental knowledge that is also in demand outside of science and form an important part of the (vocational) training of students. It is therefore important to examine methodological training in the social sciences scientifically and to reflect on it among social scientists. In this context, this session aims to address university teaching of qualitative, quantitative and mixed methods. The session includes both the theoretical debate on what to teach and the discussion of empirically grounded knowledge about how to teach. Those interested are invited to enrich the session with…

1. … theoretical contributions on what the aims and contents of the methodological training of undergraduate students should be. Welcome in this strand are contributions that reflect upon the traditional methodological training as well as contributions that focus on new challenges for methodological training, e.g. technical innovations, new data forms, alternative facts.
2. … empirical studies that show ways to improve methodological training at universities. This strand offers the opportunity to report on the application of general pedagogical or psychological findings to methodological training, in order to demonstrate and reflect on their subject-specific implementation. Especially welcome are authors’ own empirical studies on teaching in methodological training.

Repurposing: what can social scientists learn from data designed by others?

The opportunities for using data designed by others have increased sharply. First, thanks in part to the development of digital methods of research, there is an increasing availability of data created for purposes other than research. For example, search engine data and social network data have been employed in much social research, with sometimes impressive results. The same holds for data scaled up in data infrastructures, which come from a variety of sources (e.g. registers, administrative processes) and are made available at different levels of detail and customization. A further boost to repurposing comes from the open data initiatives developing within institutions and governments. Moreover, secondary analysis of survey data, and of other sample data specifically designed for research, has been encouraged by open data committees worldwide. Nevertheless, even though secondary data have always been employed in the social sciences, repurposing data for new objectives is not an easy task. Coverage bias, lack of validity, and access constraints are just some of the research problems that may arise and that have to be critically assessed. The session particularly appreciates contributions that analyze data produced with aims other than research, as well as secondary analyses of data specifically designed for research. Contributions that address the methodological challenges related to reusing data designed by others will also be very welcome.

Socioeconomic Disadvantage and Racial/Ethnic Segregation in Area-Level Research

Measuring the impact of area context in social science research is of key interest for both academic and policy-related research. The racial and socioeconomic composition of areas can be conceptualized in various ways. Some examples of indicator variables used to represent the racial and socioeconomic composition of a community area include:

Racial Composition:
- Index of Concentration at the Extremes
- % of residents who identify as a racial minority

Socioeconomic Composition:
- Index of Concentration at the Extremes
- % of residents without a high school diploma
- % of residents over the age of 16 in the workforce
- Per capita income
- % of residents with a college degree
- % of residents (total) living under the federal poverty limit
- % of residents (aged 5-17) living under the federal poverty limit
- Difference in male and female wages
- Hardship score
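
One indicator listed above, the Index of Concentration at the Extremes (ICE), is conventionally defined for an area as the number of residents in the most advantaged group minus the number in the most disadvantaged group, divided by the total population, yielding a score from -1 (everyone disadvantaged) to +1 (everyone advantaged). A minimal sketch, with a hypothetical function name and illustrative counts not drawn from any real dataset:

```python
def ice(advantaged: int, disadvantaged: int, total: int) -> float:
    """Index of Concentration at the Extremes for one area:
    (A - P) / T, ranging from -1 (all disadvantaged) to +1 (all advantaged)."""
    if total <= 0:
        raise ValueError("total population must be positive")
    return (advantaged - disadvantaged) / total

# Hypothetical income-based example: residents in the top vs. bottom
# income brackets of a census tract with 2,000 residents.
score = ice(advantaged=600, disadvantaged=300, total=2000)
print(round(score, 3))  # 0.15
```

The same formula applies whichever "extremes" are chosen (income brackets, racial groups, or a combination), which is part of why comparing such operationalizations matters.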

Researchers often want to contextualize a level of influence that is thought to affect the outcome, yet there is often no standard for how that measure should be included. For example, including the “racial composition” of an area might mean including the proportion of the neighborhood that identifies as a minority, or something else entirely.

Moreover, because indicator measures are rarely compared to one another, it is difficult to know whether observed effects are the result of the actual construct under investigation or of the particular variable chosen to represent it.

We solicit submissions that examine how the variety of measures representing the racial/ethnic and socioeconomic composition of areas relates to outcomes of interest.

Survey Recruitment on Social Media

Social media platforms present unique opportunities for recruitment in survey research. The ease with which survey opportunities can be shared with potential participants, combined with the low cost of doing so, means that social media platforms can be a viable option for recruitment. Researchers attempting to recruit participants from hidden populations, or from geographically remote areas, might be tempted to conduct survey recruitment entirely on social media platforms. Yet much remains unknown about what comprises a “successful” survey recruitment campaign on social media. Some questions that remain:

1) How well does social media recruitment work for hidden populations?
2) Is paid “reach” worth it? Should paying for signal boosting be added to grant budgets for recruitment?
3) Which platforms are best for survey recruitment? What are the relative benefits of Facebook, LinkedIn, Twitter, and Instagram?
4) What is the impact of current events on social media survey research recruitment?
5) What is the quality of survey data when recruitment is conducted on social media?

We solicit submissions on survey research in which participant recruitment was completed on social media platforms, particularly studies that can compare this approach to more traditional recruitment strategies.

Nonlinearities in Intergenerational Mobility and Income Inequality

Recently, a number of studies have provided evidence of nonlinearities and parameter heterogeneity across different socioeconomic classes in the intergenerational mobility mechanism. This evidence is consistent with the presence of multiple equilibria and poverty traps that define environments in which families exhibit highly persistent poverty across generations. This session focuses on the intergenerational persistence of economic outcomes, with special emphasis on the nonlinear role of neighborhood effects, credit constraints, and genetics.

International Journal of Social Research Methodology panel session: Who owns data? Big data and democratisation

Democratisation and big data have established themselves as key developments in social research processes, but they raise important questions about the extent to which they are opposed, and about who owns the data.

At the micro and meso level there has been a gratifying move toward community-led research, which asks questions relevant to a given community and gives that community some control over the data generated. At the same time, massive investment in computing capacity and technological skills has demonstrated the potential of big data, generated through social media or socio-economic transactions, to provide important macro-level insights. But these data are often ‘owned’ by commercial interests, or they are analysed by scientists from a computing background who have no training in social science. The ‘questions asked’ of these data are either trivial or, it is often claimed, serve narrow ideological or commercial interests.

Furthermore, there is a debate about what might be called ‘the missing middle’: sample-based survey research using large, or relatively large, samples. Savage and Burrows (2007) argued that this kind of research would play a lesser role in the future and would be superseded by research using ‘big data’. Others, such as John Goldthorpe (2016), have argued that survey research is far from dead.

The methodological possibilities and challenges of a pluralist combination of all three of these manifestations of social research are exciting, but issues of power, skills, data ownership (primarily the rights of individuals to control their information versus the entitlements of organisations, governments, and companies) and data vulnerability (e.g. privacy, identity theft, etc.) may undermine such possibilities, with ‘big data’ research owned and commissioned by rich and powerful interests, leaving under-resourced, methodologically limited research as the only possibility for democratising research.

In what significant ways do these methodological conflicts and divides exist?

And, given these various arrangements, what are the challenges for social research?

Who owns the questions, the research and the data and who should own the questions, the research and the data?

Is the requirement for sophisticated analytical skills a barrier to the democratisation of big data and large-scale survey data?

Contemporary issues in longitudinal and panel data analysis

Longitudinal and panel data are used to study change within the units of interest (e.g., persons, groups, organizations, or countries). However, researchers may face several difficulties when working with panel data. For example, the identification of within-unit effects and their separation from stable between-unit variation, as well as the correct specification of the direction of effects, are central issues for causal modeling; yet in many cases the possibility of reversed effects is neglected. Furthermore, a prerequisite for valid analyses and comparisons of substantive constructs across time is that the measurements are actually comparable (i.e., equivalent) across occasions. Often, however, this prerequisite is only assumed rather than tested.

The methodological and statistical literature offers powerful tools to specify panel models that separate within- and between-unit variability and account for reciprocal causation (e.g., structural equation models with fixed effects) and to test the equivalence of measurements across time (longitudinal confirmatory factor analysis). This session aims at presenting studies that address particular analytical and conceptual problems associated with the broad issues outlined above. For example:
• Causal modeling of panel data
– Sequential exogeneity and reciprocal causality
– Lagged dependent and lagged independent variables
– Flexible specification and comparison/test of alternative models
– ML/WLS estimation and handling of missing data
• Measurement equivalence in panel data
– How to deal with nonequivalence? Are small deviations tolerable?
+ Bayesian approximate methods, alignment method
+ Partial measurement equivalence
– Sources of longitudinal measurement nonequivalence?
+ Systematic panel attrition
+ Life-course events
+ Developmental processes

We welcome presentations that apply one of the methods mentioned above (or related ones) and approach one of the issues associated with panel data, either using empirical data and/or taking a methodological approach, for example via Monte Carlo simulations.
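
The within/between separation underlying many of these models can be illustrated in its simplest form: demeaning each unit's series isolates within-unit variation (deviations from the unit mean) from between-unit variation (the unit means themselves). The sketch below uses hypothetical data and plain Python; it is only a conceptual illustration, not a substitute for a full fixed-effects or structural equation model:

```python
def within_between(panel):
    """Split balanced panel observations into between-unit components
    (unit means) and within-unit components (deviations from the unit mean).

    panel: dict mapping unit id -> list of repeated measurements.
    Returns (between, within) dicts with the same keys.
    """
    between, within = {}, {}
    for unit, values in panel.items():
        mean = sum(values) / len(values)
        between[unit] = mean
        within[unit] = [v - mean for v in values]
    return between, within

# Hypothetical example: three waves for two respondents.
between, within = within_between({"A": [2.0, 4.0, 6.0], "B": [1.0, 1.0, 4.0]})
print(between["A"])  # 4.0
print(within["A"])   # [-2.0, 0.0, 2.0]
```

A fixed-effects regression on the demeaned (within) values uses only change over time within units, which is precisely why it removes stable between-unit confounding.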

Analyzing unstructured data: video, audio, images, text
Additional Open Session / Abstract not currently available
Triangulation and Mixed Methods
Additional Open Session / Abstract not currently available
Coding & Analysis of Unstructured Data
Additional Open Session / Abstract not currently available
Experimental Methods in the Social Sciences (e.g. recruitment, ethics, designs)
Additional Open Session / Abstract not currently available
Unfolding and IRT (e.g. estimation, sample sizes, model fit etc)
Additional Open Session / Abstract not currently available
School, Work and Occupational studies: methodological challenges
Additional Open Session / Abstract not currently available
Measuring social and political sentiment and emotions
Additional Open Session / Abstract not currently available
Monitoring Offensive and Hate Speech Online
Additional Open Session / Abstract not currently available
Researching older populations
Additional Open Session / Abstract not currently available
Researching immigrant populations
Additional Open Session / Abstract not currently available
Research Methods / Empirical Research and Society (e.g. educating politicians, the Press and the public to become good consumers of research findings)
Additional Open Session / Abstract not currently available
Preparing primary and secondary education students to become research-oriented citizens, i.e., citizens who expect to get answers from research, rather than from other agents such as populism, religion etc.
Additional Open Session / Abstract not currently available
Societal Complexity and Simulation

The world is becoming increasingly complex. We are faced with all kinds of threats, and at the same time new challenges pop up that demand immediate attention. Policy makers have to find answers to these threats and challenges.
This is very hard to do. Policy makers need the support of scientific methodology to analyze complex real-life problems and to be able to handle them.
For this purpose, the field of the methodology of societal complexity has developed a holistic methodology: the Compram methodology. It handles societal complexity beyond System Dynamics (MIT) and the Soft Systems Methodology (Checkland), integrating approaches from the natural and social sciences into a systemic, interactive approach to policy making. It works in collaboration with experts and actors in order to find possible transitions of the situation that can be mutually accepted and implemented, taking into account real-life knowledge, emotion and power, and combining advanced tools and methods such as computer simulation for analyzing and handling societal complexity. It builds on a series of publications reflecting on complex societal problems such as floods, hurricanes, water problems, urban planning, climate change, the credit crisis, economic problems, health care problems, problems due to the agricultural industry, and terrorism, and it draws on advanced approaches from system theory, computer simulation and non-linear modeling.

For this special session on Societal Complexity and Simulation, colleagues from different disciplines (social sciences, natural sciences, life sciences, system dynamics and operational research) and fields (such as healthcare, education, agriculture, traffic, and water management) are invited to contribute to such a methodology: doing research on complex societal problems, using computer simulation where applicable; reflecting on it in a theoretical and methodological way; and demonstrating that scientific understanding, knowledge, methodology, expertise and skills help society to handle its complex societal problems.

General Session
Please choose this session if you are not sure where your abstract fits best. The conference organizers will either assign these abstracts to existing sessions or create new sessions to accommodate all submitted abstracts. If you have any questions, please contact the organizers and we will be happy to assist you.