09 May 2025

British Journal of Management (BJM) Special Issue Call for Papers: Generative artificial intelligence for future decision-making boundaries: Establishing fresh theories and praxis

A Special Issue Call for Papers for BAM's British Journal of Management (BJM)

 


Special Issue Call for Papers

Generative artificial intelligence for future decision-making boundaries: Establishing fresh theories and praxis

Paper submission window: 1 to 31 December 2025

 

Special Issue Editorial Team

BJM Consulting Editor:

Academic rationale

Generative artificial intelligence (GAI), a specialised branch of AI, propels traditional machine learning forward by creating new and original content derived from acquired knowledge, demonstrating proficiency in crafting patterns across a spectrum of formats, including text, visuals, and sound (Brown et al., 2024; Grimes et al., 2023). While conventional AI confines itself to the analysis of pre-existing data, GAI transcends this limitation by producing fresh content based on the patterns its algorithms have learnt and assimilated. This innovative capacity opens a realm of apparently boundless opportunities, positioning it at the forefront of technological advancement. GAI has shown increasing potential to replace human decision-makers in domains such as strategic decision-making (Doshi et al., 2025), marketing and advertising (Cillo and Rubera, 2024), human resource management (Budhwar et al., 2023) and banking (Moharrak & Mogaji, 2024). GAI applications are already revolutionising traditional management and leadership paradigms. For example, Copilot is embedded into Microsoft 365, assisting leaders by summarising meetings, generating action items, and even forecasting project outcomes based on data inputs. DeepSeek is being used by managers to draft reports, analyse feedback, and generate training content almost instantly, reducing reliance on administrative support and enhancing management agility.

It should be highlighted that previous scholarly efforts examining AI’s influence on management and leadership are predicated on the premise that AI functions as a supportive tool in managerial ‘white collar’ processes, including aiding in problem-solving, decision-making, and executing tasks necessitating human-level intelligence under human oversight (Raisch & Fomina, 2024). Recent conversations in BJM highlight that, in the age of GAI, the linkage between technologically driven and human-driven behaviour is becoming increasingly blurred (Aharonson et al., 2025; Brown et al., 2024). This blurring is most evident in the domain of decision-making (Raisch & Krakowski, 2021; Shrestha et al., 2019). Unlike traditional AI, which is largely deterministic, predictive and rule-based, GAI is creative, conversational, adaptive and context-aware, and can generate new content, ideas, and even strategies based on probabilistic modelling of language and behaviour (Doshi et al., 2025). Tools like ChatGPT, Gemini, and Copilot are not merely assisting with decisions; they are participating in the ideation and framing of decision problems themselves. A range of decision-making paradigms are enabled by GAI, such as augmented decision-making, where GAI enhances human judgment (Herath Pathirannehelage et al., 2024); hybrid decision-making, where humans and GAI collaborate dynamically (Raisch & Fomina, 2024); and autonomous decision-making, where GAI systems operate independently (Wong et al., 2023). Consequently, the decision-making boundaries of organisations in the context of GAI are also increasingly blurred, which raises four important questions.

First, the human-machine relationship bridged by GAI will largely affect management structures, organisational hierarchies, and task ownership in organisations (Bechky & Davis, 2025). The question is “are traditional considerations of transparency, explainability, visibility, accountability and responsibility of decision-making still valid in the age of GAI?”
 
Second, the field is currently attempting to uncover the pivotal dilemma confronting many GAI users: framing relationships between humans and GAI (Brown et al., 2024). This propels us to delve deeper into the crucial question of “what constitutes GAI’s optimum boundaries in different contexts of managerial decision-making?” Designing such boundaries involves not only mathematical calculations of costs and returns, but also the psychological balance of human actors in organisations.

Third, “how should the optimum boundaries of decision-making be categorised?” For instance, human-GAI decision-making boundaries may be determined by level of decision-making (i.e., strategic, tactical, or operational), by functional area of decision-making (e.g., operations, HR, or marketing), by levels of risk and security concerns, or by levels of complexity or the time consumed by tasks (e.g., Chou & Cho, 2023; Wong et al., 2023). GAI tools also enable new forms of sensemaking, such as synthesising unstructured data, generating alternative scenarios, and simulating potential outcomes with unprecedented speed and nuance. These capabilities challenge traditional assumptions about who decides, how decisions are framed, and what information is considered authoritative. Organisations must therefore rethink the locus of decisions, accountability, auditability, and control systems in the age of GAI, which requires distinct considerations of trust, explainability, and organisational governance.

Fourth, GAI development alters the basic premise of traditional theoretical foundations in management, which argue that “the vocabulary of administrative theory must be derived from the logic and psychology of human choice” and that “administrative theory must be concerned with the limits of rationality, and the manner in which organisations affect these limits for the person making a decision” (Simon, 1976, pp. 45-46). Hence, the question is: “are traditional theoretical foundations still relevant to explain GAI decision-making, or do management scholars need to develop new theories and frameworks to explain and predict GAI phenomena?”

The inspiration for this Special Issue stems from these important questions and from the research projects conducted by the Special Issue editorial team on technological anxiety surrounding GAI. We advocate that, to understand the optimum boundaries of decision-making in the age of GAI, the foundational assumptions underpinning traditional management theories need to be challenged and updated. Although research is beginning to address this changing landscape and to explore the implications of the new challenges in the context of AI (Bechky & Davis, 2025; Borges et al., 2021; Brown et al., 2024; Chowdhury et al., 2024; Doshi et al., 2025; Dwivedi et al., 2023; Haefner et al., 2021; Hamilton & Sodeman, 2020), recent conversations in BJM suggest that the field currently lacks robust theoretical foundations. Hence, there is a need to develop, extend, adapt, and evolve theoretical frameworks (Brown et al., 2024), considering the evolution of GAI and its potential implications for the future of management.

Given the rapid pace of development in GAI in recent years, this Special Issue aims to provide a rich knowledge-exchange forum that appeals strongly to the academic community. The objective is to attract papers that clearly question, define and explain the optimum boundaries of GAI’s augmented, hybrid or autonomous decision-making contexts in organisations. We invite scholars actively engaged in this frontier field to submit empirical papers that enhance the depth and breadth of current understanding of the phenomena, augment existing theories to be relevant to GAI, suggest new theories and conceptual frameworks, and provide novel ways of studying the various types of decision-making stemming from the use and adoption of GAI, while introducing novel insights and outlining a clear roadmap for further investigations. We emphasise that this topic area can be examined by integrating multiple perspectives (e.g., marketing, HR, operations management, EDI and gender studies, and compliance frameworks) to offer novel explanations that contribute to multi-disciplinary understanding.

Special Issue main themes

This Special Issue invites papers that are original and novel in the research field of GAI, rather than AI. While AI is a broader and more mature research area, this Special Issue is intended to advance scholarly understanding of the novel implications, tensions, and opportunities arising specifically from GAI. Papers addressing traditional AI without a generative component will be considered out of scope.

Given the fast adoption of GAI in businesses (Brown et al., 2024), the editorial team would particularly welcome empirical papers. We expect that 6-10 papers will be included in this Special Issue.

We expect that the Special Issue will include papers that address (but are not limited to) the following research themes, as well as work at the intersection of these themes:

a. Individual level 

  • What happens when GAI-generated decisions conflict with human intuition or ethical values?
  • How does GAI affect the cognitive load, mental models, and sense of professional identity of managers?
  • Under what conditions should managers be willing to entrust and authorise GAI systems to formulate and implement decisions independently?
  • What psychological elements motivate the delegation of managerial decisions to GAI, and why?
  • Which leadership competencies will remain vital for management teams in the context of GAI-enabled decision-making, and why?
  • How and at what level do risk management decisions impact the human-GAI relationship and the performance of GAI-enabled managerial decisions?

b. Organisational level

  • How do organisations prepare for moral dilemmas arising from unforeseen GAI decisions (e.g., in a crisis)?
  • How might different levels and forms of deployment of GAI redefine the operational and structural boundaries and hierarchy of organisations?
  • How should the optimum boundaries of GAI decision-making be defined?
  • Which aspects of managerial decision-making are most likely to be delegated to GAI, and what implications might this have for the organisational culture and overall performance of the organisation?
  • How and in what context can poor design of decision-making boundaries between humans and GAI affect the performance of organisations?
  • How can organisations design their cognitive frameworks and data management policies to facilitate GAI-driven management systems across various departments?
  • How does GAI decision-making transform the essence of organisational decision-making in different aspects of the organisation (e.g., strategic management, HRM, supply chain management, and innovation management)?

c. Community level 

  • How can we enhance our understanding of the legal and ethical considerations that influence the adoption of GAI-enabled decision-making?
  • How do cultural, national, and institutional differences influence the deployment and acceptance of GAI-enabled decision-making?
  • Are Western models of autonomy and accountability compatible with GAI deployment in non-Western management systems?
  • How should the considerations of transparency, explainability, visibility, accountability and responsibility of decision-making be redefined in the age of GAI?
  • Are conventional theories still relevant to explain the optimum boundaries of GAI decision-making? What should be the alternative theoretical underpinnings?

Developmental sessions

The editorial team will organise a series of events and feedback sessions. 

Firstly, the editorial team will organise an inclusive Special Issue online launch event in collaboration with the BAM Special Interest Groups (Strategy SIG, Organisational Transformation, Change and Development SIG, and Leadership SIG), led by Professor Qile He and Professor Maureen Meadows and supported by all the editorial team members. This event will introduce the SI, clarify SI expectations, address questions from potential authors, and call for quality submissions. A number of AI experts will also be invited as keynote speakers to introduce and discuss the topic. The online event will be open to BAM members and attendees worldwide. This launch event will be publicised through the BAM website, BAM newsletters, editorial team members’ home institution websites, and relevant social networks.

Secondly, a hybrid paper development workshop (PDW), hosted by the University of Derby, will be organised 3-4 months after the launch event. Aspiring authors will be asked to submit an extended abstract and present their planned submission at this PDW, which will allow potential authors to discuss and engage with the editorial team. Although attendance at this PDW is not a precondition for submitting to this Special Issue, potential authors are encouraged to submit their extended abstracts to the PDW and receive guidance on preparing their manuscripts for submission. Invitations to this PDW will also be disseminated through the BAM website, BAM newsletters, editorial team members’ home institution websites, and relevant social networks.

Thirdly, an optional, exclusive and individualised online feedback session will be organised for authors of manuscripts that receive an invitation to revise and resubmit. Authors will have the opportunity to receive more detailed feedback from the editorial team on how to improve their manuscripts. The editorial team will contact eligible authors with details of this optional online session. However, authors should bear in mind that this session is for developmental purposes only and does not guarantee final acceptance of the paper. Participation in this session does not replace formal peer review; it only complements it.

Special Issue editor short biographies

Professor Qile He
Qile He, PhD, is Professor of Strategy and Performance Management at the University of Derby, UK, and Chair of the College Research Committee. His research interest lies in strategy and inter-firm relationships, as well as the innovation strategies and processes of organisations. His recent research centres on the application of AI in the context of supply chain collaboration and supply chain optimisation. He has published over 100 papers in refereed journals, books, and leading international conference proceedings, including prestigious international journals such as British Journal of Management, International Journal of Production Economics, European Management Review, Technological Forecasting and Social Change, International Journal of Production Research, Supply Chain Management: An International Journal, and Production Planning & Control. He serves on the editorial boards of three international journals, including the field-leading International Journal of Operations and Production Management (CABS 4*), and successfully guest edited a special issue for the International Journal of Management Reviews. He reviews for over 20 journals, including renowned 3*/4* journals, and has also reviewed for EU Horizon, the Leverhulme Trust, and the Marsden Fund of the Royal Society Te Apārangi, New Zealand. He is a recipient of the Best Reviewer Award from the Emerald Literati Network. He is a Senior Fellow of the Higher Education Academy and a Chartered Member of the Chartered Institute of Logistics and Transport. He has supervised multiple PhD candidates to completion. He is currently a Fellow of the BAM Peer Review College and has completed his role as a BAM Council member after five years of continuous service.

Professor Ashley Braganza
Dr. Ashley Braganza is Director of the Research Centre for Artificial Intelligence and Chair of Organisational Transformation at Brunel University of London, UK. He launched and currently hosts The AI Adoption Podcast. He has held various leadership roles, including Dean of Brunel Business School, Deputy Dean, and Head of the Department of Economics and Finance. He is a Fellow of the British Academy of Management and a Trustee of the Industry and Parliament Trust, and received the 2024 Sustainability Leadership in Education award. As a recognised expert in artificial intelligence (AI) and digital transformation, Dr. Braganza bridges academia and industry. He founded the Research Centre for Artificial Intelligence at Brunel, securing over £3.5 million in research grants. Under his leadership, the centre gained international recognition, including winning a €5.8 million bid for the ELOQUENCE EU project, for which he is a co-investigator. He also led the creation of a Master’s in AI Strategy to nurture future AI leaders. Dr. Braganza is actively involved in AI policy, contributing to the All-Party Parliamentary Group on AI. His research focuses on AI’s impact on corporate boardrooms, job engagement, and organisational trust. He also oversees Brunel’s AI Living Lab, an interdisciplinary hub for AI research, collaboration, and practical application. Through his work, he has significantly shaped AI's role in both academic and commercial sectors, publishing over 100 papers and completing over 50 consultancy assignments globally.

Professor Maureen Meadows
Maureen Meadows is Professor of Strategic Management in the Centre for Business in Society, Coventry University, UK. Formerly with the Open University Business School, UK, and Warwick Business School, UK, she has a background in mathematics, statistics and operational research, and has many years’ experience of working with 'big data' and customer analytics, both as a practitioner in the financial services sector and as an academic. Maureen is Associate Editor at Long Range Planning (ranked CABS 4*) and has been guest editor for Special Issues in many journals, including International Journal of Management Reviews and Technological Forecasting and Social Change. Maureen is co-author of Strategy: Theory, Practice, Implementation (Oxford University Press, 2023, 2nd edition). At Coventry University, Maureen is Director of the DBA Programme and co-leader of a research cluster on ‘Data, Organisations and Society’, with a particular interest in ‘big data’, the strategic decision-making that can result from shared data, and the ethical use of new technologies such as AI. Her interests and expertise are reflected in the cluster’s developing research theme of digital vulnerability, defined as susceptibility to harm due to an interaction between the digital resources available to individuals and communities and the life challenges they face. This research agenda embraces digital exclusion; unfairness and inequality; and the unintended consequences of the digital transition in the economy and society. Maureen is a Fellow of the British Academy of Management and Chair of the Teaching Community of the Strategic Management Society in 2025. Maureen is a member of CRISP, the Centre for Research into Information, Surveillance and Privacy, a collaboration between the Universities of Stirling, St Andrews, Edinburgh, Essex and Coventry.

Professor Zhicheng Chen 
Dr. Zhicheng Chen is Professor of Technology Economics and Management at Shanghai Lixin University of Accounting and Finance, China, where he serves as Director of the Institute of Higher Education and of the Development Planning Office. He also holds academic roles as an invited researcher at the Fudan Development Research Institute and a senior researcher at the Shanghai International Development Research Centre for Traditional Chinese Medicine. He was previously a member of the drafting group for Shanghai’s medium- and long-term strategic plan. His research interests lie in technological innovation and finance, as well as technology-driven social change and urban sustainable development. He has published over 20 papers in peer-reviewed journals and books. He has led and completed more than 10 provincial and ministerial projects, research institution projects, and enterprise-commissioned projects, submitting more than 30 research reports focusing on technological change and governance innovation, technology finance, and urban sustainable development. Many of these reports have been adopted by governments or industry, and he has received three provincial and ministerial decision-making and consulting achievement awards.

Dr. Soumyadeb Chowdhury
Soumyadeb Chowdhury is Associate Professor of AI, Emerging Technologies and Digital Sustainability at TBS Business School, Toulouse, France. He is currently interim co-editor-in-chief and an associate editor of the British Journal of Management. His research concerns AI management, trust and adoption, digital responsibility, digital sustainability, sustainable supply chain and operations management, human factors in digitalisation, the circular economy, employee wellbeing, and business productivity. His research has been supported by competitive international funding from prestigious bodies such as UK Research and Innovation, the British Academy, the Royal Academy of Engineering, the British Council, the Global Challenges Research Fund, and the French National Research Agency. His research has been published and cited in leading journals such as British Journal of Management, Human Resource Management, International Journal of Production Economics, Technological Forecasting and Social Change, and International Journal of Production Research.

Submission Instructions: 

We are aiming to publish this Special Issue in the April 2027 issue, with a submission window between 1 and 31 December 2025.

Authors should select ‘special issue paper’ as the paper type, answer ‘yes’ to the question ‘Is this submission for a special issue?’ and enter the title of the special issue in the box provided.

The editorial team will work to the timetable below.

  • Submission window: 1-31 December 2025
  • All desk rejects and assignment of reviewers completed: 31 January 2026
  • Reviews all in and decisions on papers to be offered revise and resubmit completed: 31 March 2026
  • All revised papers received and sent for second-round review: 30 June 2026
  • Reviews all in and decisions on papers to be offered minor revisions completed: 31 August 2026
  • All revised papers received and sent for final review where necessary: 30 November 2026
  • Reviews all in and decisions on papers to be included in the issue completed: 28 February 2027
  • Special Issue published: April 2027

Guest Editors’ Contact Details for Enquiries:

References

Aharonson, B. S., Arndt, F. F., Budhwar, P., Chang, Y. Y., Chowdhury, S., et al. (2025). Establishing a contribution: Calibration, contextualization, construction and creation. British Journal of Management, 36, 481-499.

Bechky, B. A., & Davis, G. F. (2025). Resisting the algorithmic management of science: Craft and community after generative AI. Administrative Science Quarterly, 70(1), 1-22.

Borges, A. F. S., Laurindo, F. J. B., Spinola, M. M., Goncalves, R. F., & Mattos, C. A. (2021). The strategic use of artificial intelligence in the digital era: Systematic literature review and future research directions. International Journal of Information Management, 57, Article 102225. 

Brown, O., Davison, R. M., Decker, S., Ellis, D. A., Faulconbridge, J., Gore, J., et al. (2024). Theory-driven perspectives on generative artificial intelligence in business and management. British Journal of Management, 35, 3-23.

Budhwar, P., Chowdhury, S., Wood, G., Aguinis, H., Bamber, G. J., et al. (2023). Human resource management in the age of generative artificial intelligence: Perspectives and research directions on ChatGPT. Human Resource Management Journal, 33(3), 606-659.

Chou, H. M., & Cho, T. L. (2023). Utilizing text mining for labeling training models from futures corpus in generative AI. Applied Sciences, 13(17), Article 9622.

Chowdhury, S., Budhwar, P., & Wood, G. (2024). Generative artificial intelligence in business: towards a strategic human resource management framework. British Journal of Management, 35(4), 1680-1691.

Cillo, P., & Rubera, G. (2024). Generative AI in innovation and marketing processes: A roadmap of research opportunities. Journal of the Academy of Marketing Science, 1-18.  

Doshi, A. R., Bell, J. J., Mirzayev, E., & Vanneste, B. S. (2025). Generative artificial intelligence and evaluating strategic decisions. Strategic Management Journal, 46(3), 583-610.

Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., et al. (2023). "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, Article 102642.

Grimes, M., Von Krogh, G., Feuerriegel, S., Rink, F., & Gruber, M. (2023). From scarcity to abundance: Scholars and scholarship in an age of generative artificial intelligence. Academy of Management Journal, 66(6), 1617-1624.

Haefner, N., Wincent, J., Parida, V., & Gassmann, O. (2021). Artificial intelligence and innovation management: A review, framework, and research agenda. Technological Forecasting and Social Change, 162, Article 120392.  

Hamilton, R. H., & Sodeman, W. A. (2020). The questions we ask: Opportunities and challenges for using big data analytics to strategically manage human capital resources. Business Horizons, 63(1), 85-95.  

Herath Pathirannehelage, S., Shrestha, Y. R., & von Krogh, G. (2024). Design principles for artificial intelligence-augmented decision making: An action design research study. European Journal of Information Systems, 34(2), 207–229. 

Moharrak, M. & Mogaji, E. (2024). Generative AI in banking: empirical insights on integration, challenges and opportunities in a regulated industry, International Journal of Bank Marketing, Ahead-of-print.

Raisch, S., & Krakowski, S. (2021). Artificial intelligence and management: The automation–augmentation paradox. Academy of Management Review, 46(1), 192-210.

Raisch, S., & Fomina, K. (2024). Combining human and artificial intelligence: Hybrid problem-solving in organizations. Academy of Management Review, 50(2), 441-464. 

Shrestha, Y. R., Ben-Menahem, S. M., & Von Krogh, G. (2019). Organizational decision-making structures in the age of artificial intelligence. California Management Review, 61(4), 66-83.

Simon, H. A. (1976). Administrative behavior: A study of decision-making processes in administrative organization, 3rd ed. Free Press. 

Wong, I. A., Lian, Q. L., & Sun, D. (2023). Autonomous travel decision-making: An early glimpse into ChatGPT and generative AI. Journal of Hospitality and Tourism Management, 56, 253-263.