International Journal of Academic Research in Business and Social Sciences


Knowledge Lineage from Isnad to AI: Reframing Authorship and Responsibility in the Generative AI Era

Open access
Generative artificial intelligence (AI) is rapidly transforming scholarly writing and research, raising urgent questions about authorship, intellectual ownership, and epistemic responsibility. This study investigates how knowledge lineage, defined as the traceable chain of transmission, transformation, and attribution that connects ideas to their origins, can be preserved in an era in which AI increasingly mediates knowledge production. Adopting a theoretical–conceptual methodology, the paper integrates Western philosophy (Foucault’s author-function, Kuhn’s paradigm shifts), Floridi’s concept of ectypes, and Islamic epistemology (isnād, amāna, masʾūliyya), alongside a comparative analysis of institutional practices at Harvard, Oxford, and King Saud University. The analysis reveals that while AI accelerates content creation, it simultaneously introduces epistemic opacity, breaks attribution chains, and generates “orphaned” knowledge outputs that lack identifiable provenance. To address these risks, the study proposes a multi-tiered ethical framework comprising mandatory AI-use disclosure, human-in-the-loop verification, a metadata-based “AI isnād,” and institutional accountability mechanisms. This framework bridges global AI ethics (OECD, UNESCO) with culturally embedded models of moral responsibility, reinforcing that human scholars must remain the ethical custodians of knowledge. The paper concludes that preserving knowledge lineage is not merely a technical challenge but a moral and cultural imperative. By embedding transparency, traceability, and accountability into both technical systems and institutional policies, the integrity of scholarly communication can be sustained in an AI-saturated future.
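The metadata-based “AI isnād” described in the abstract can be pictured as a hash-linked chain of attribution records, in which each human or AI contribution references a digest of the preceding link, so that any retroactive alteration of the chain becomes detectable. The sketch below is purely illustrative (the class names, fields, and hashing scheme are the editor’s assumptions, not a mechanism specified in the paper):

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from typing import List


@dataclass
class TransmissionLink:
    """One link in the chain: a human or AI agent that produced or transformed content."""
    agent: str        # e.g. "Jane Doe (author)" or "LLM assistant" (illustrative labels)
    role: str         # e.g. "original draft", "AI paraphrase (disclosed)", "human verification"
    timestamp: str    # ISO 8601 date of the contribution
    parent_hash: str  # digest of the previous link; "" marks the chain's origin

    def digest(self) -> str:
        # Hash this link's fields, which include the parent's digest, so that
        # tampering with any earlier link invalidates all later links.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()


class AIIsnad:
    """A verifiable chain of attribution for a piece of scholarly text."""

    def __init__(self) -> None:
        self.links: List[TransmissionLink] = []

    def add_link(self, agent: str, role: str, timestamp: str) -> TransmissionLink:
        parent = self.links[-1].digest() if self.links else ""
        link = TransmissionLink(agent, role, timestamp, parent)
        self.links.append(link)
        return link

    def verify(self) -> bool:
        """Check that every link's parent_hash matches the digest of the link before it."""
        expected = ""
        for link in self.links:
            if link.parent_hash != expected:
                return False
            expected = link.digest()
        return True
```

In this sketch, disclosure and human-in-the-loop verification each become explicit links in the chain; a complete system would additionally publish or countersign the final digest so that tampering with the most recent link is also detectable.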
Abdulrahman, M. A. (2024). Cultural and social influences on hadith classification: An analytical study of historical transformations. Journal of Ecohumanism, 3(8), 2783–2791. https://doi.org/10.62754/joe.v3i8.4926
Al-Smadi, M. (2025). IntegrityAI at GenAI Detection Task 2: Detecting machine-generated academic essays in English and Arabic using ELECTRA and stylometry. arXiv preprint arXiv:2501.05476. https://doi.org/10.48550/arXiv.2501.05476
American Psychological Association. (2017). Ethical principles of psychologists and code of conduct. https://www.apa.org/ethics/code
Barthes, R. (1977). Image-music-text (S. Heath, Trans.). Fontana Press.
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623). ACM. https://doi.org/10.1145/3442188.3445922
Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., Bernstein, M. S., Bohg, J., Bosselut, A., Brunskill, E., Brynjolfsson, E., Buch, V., Card, D., Castellon, R., Chatterji, N., Chen, A. S., Creel, K., Davis, J. Q., Demszky, D., ... Liang, P. (2021). On the opportunities and risks of foundation models. arXiv. https://doi.org/10.48550/arXiv.2108.07258
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
Bouhafa, F. (2021). The dialectics of ethics: Moral ontology and epistemology in Islamic philosophy. Journal of Arabic and Islamic Studies, 21(2), 25–54. https://doi.org/10.5617/jais.9368
Brown, J. A. C. (2014). Misquoting Muhammad: The challenge of interpreting the Prophet’s legacy. Oneworld Publications.
Brown, J. A. C. (2018). Hadith: Muhammad’s legacy in the medieval and modern world (2nd ed.). Oneworld Publications.
Cath, C. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A, 376(2133), 20180080. https://doi.org/10.1098/rsta.2018.0080
Code, L. (1987). Epistemic responsibility. University Press of New England and Brown University Press. https://archive.org/details/epistemicrespons0000code
Committee on Publication Ethics (COPE). (2023, February 13). Authorship and AI tools. https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools
Cui, Y., & Widom, J. (2003). Lineage tracing for general data warehouse transformations. The VLDB Journal, 12(1), 41–58. https://doi.org/10.1007/s00778-002-0083-8
Dotan, R., Parker, L. S., & Radzilowicz, J. G. (2024). Responsible adoption of generative AI in higher education: Developing a “points to consider” approach based on faculty perspectives. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24) (pp. 1–16). ACM. https://doi.org/10.1145/3630106.3659023
Efrati, A., Palazzolo, S., & Mascarenhas, N. (2025, February 3). OpenAI is challenging Google—while using its search data. The Information. https://www.theinformation.com/articles/openai-is-challenging-google-while-using-its-search-data
Floridi, L. (2018a). Artificial intelligence, deepfakes and a future of ectypes. Philosophy & Technology, 31(3), 317–321. https://doi.org/10.1007/s13347-018-0325-3
Floridi, L. (2018b). Soft ethics and the governance of the digital. Philosophy & Technology, 31, 1–8. https://doi.org/10.1007/s13347-018-0303-9
Floridi, L. (2019). The logic of information: A theory of philosophy as conceptual design. Oxford University Press. https://doi.org/10.1093/oso/9780198833635.001.0001
Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1, 2-15. https://doi.org/10.1162/99608f92.8cd550d1
Foucault, M. (1977). What is an author? In D. F. Bouchard (Ed.), Language, counter-memory, practice: Selected essays and interviews (pp. 113–138). Cornell University Press.
Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford University Press.
Görke, A. (2010). Review of Jonathan A. C. Brown: Hadith: Muhammad’s legacy in the medieval and modern world. Bulletin of the School of Oriental and African Studies, 73(3), 534–536. https://doi.org/10.1017/S0041977X10000467
Ghosal, T., Tiwary, P., Patton, R., & Stahl, C. (2021). Towards establishing a research lineage via identification of significant citations. Quantitative Science Studies, 2(4), 1511–1528. https://doi.org/10.1162/qss_a_00152
Greene, D., Hoffmann, A. L., & Stark, L. (2019). Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. In Proceedings of the 52nd Hawaii International Conference on System Sciences (pp. 2122–2131). https://hdl.handle.net/10125/59651
Hadith studies. (n.d.). In Wikipedia. https://en.wikipedia.org/wiki/Hadith_studies
Harvard Extension School. (n.d.). Academic integrity. Harvard University. https://extension.harvard.edu/enrolled-students/academic-integrity/
He, J., Houde, S., & Weisz, J. D. (2025). Which contributions deserve credit? Perceptions of attribution in human–AI co-creation. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (pp. 1–18). ACM. https://doi.org/10.48550/arXiv.2502.18357
High-Level Expert Group on Artificial Intelligence. (2019). Ethics guidelines for trustworthy AI. European Commission. https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence
Humphreys, P. (2009). The philosophical novelty of computer simulation methods. Synthese, 169(3), 615–626. https://doi.org/10.1007/s11229-008-9435-2
IEEE. (2019). Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems (1st ed.). IEEE Standards Association. https://standards.ieee.org/develop/indconn/ec/autonomous_systems.html
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2
Journal of King Saud University – Science. (n.d.). Ethical guidelines: Use of artificial intelligence (AI). https://jksus.org/ethical-guidelines/
Khosrowi, D., Finn, F., & Clark, E. (2024). Engaging the many-hands problem of generative-AI outputs: A framework for attributing credit. AI and Ethics, 1-19. https://doi.org/10.1007/s43681-024-00440-7
Kuhn, T. S. (1997). The structure of scientific revolutions. University of Chicago Press.
Lemley, M. A. (2024). How generative AI turns copyright upside down. Columbia Science and Technology Law Review, 25(2), 1–45. https://doi.org/10.52214/stlr.v25i2.12761
Lloyd, G. E. R. (1996). Adversaries and authorities: Investigations into ancient Greek and Chinese science. Cambridge University Press.
Longpre, S., Mahari, R., Chen, A., Obeng-Marnu, N., Sileo, D., Brannon, W., ... & Hooker, S. (2024). A large-scale audit of dataset licensing and attribution in AI. Nature Machine Intelligence, 6(8), 975–987.
Mittelstadt, B. D. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501–507. https://doi.org/10.1038/s42256-019-0114-4
Motzki, H. (2002). The origins of Islamic jurisprudence. Brill.
Nature. (2023). Tools such as ChatGPT threaten transparent science. Nature, 613(7945), 612. https://doi.org/10.1038/d41586-023-00191-1
Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., Buck, S., Chambers, C. D., Chin, G., Christensen, G., Contestabile, M., Dafoe, A., Eich, E., Freese, J., Glennerster, R., Goroff, D., Green, D. P., Hesse, B., Humphreys, M., ... Yarkoni, T. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425. https://doi.org/10.1126/science.aab2374
OECD. (2019). Recommendation of the council on artificial intelligence. OECD Legal Instruments. https://oecd.ai/en/ai-principles
Perkins, M., Roe, J., Postma, D., McGaughran, J., & Hickerson, D. (2023). Game of tones: Faculty detection of GPT-4 generated content in university assessments. arXiv. https://arxiv.org/abs/2305.18081
Petro, M. (2025, May 20). University at Buffalo students protest use of AI detection tool. The Buffalo News. Republished by GovTech. https://www.govtech.com/education/higher-ed/university-at-buffalo-students-protest-use-of-ai-detection-tool
Reed, B. (2001). Epistemic agency and the intellectual virtues. The Southern Journal of Philosophy, 39(4), 507–526. https://doi.org/10.1111/j.2041-6962.2001.tb01820.x
Russell, S. (2019). Human compatible: AI and the problem of control. Penguin UK.
Sağlam, T., & Schmid, L. (2025). Evaluating software plagiarism detection in the age of AI: Automated obfuscation and lessons for academic integrity. arXiv preprint arXiv:2505.20158. https://doi.org/10.48550/arXiv.2505.20158
Samuelson, P. (1985). Allocating ownership rights in computer-generated works. University of Pittsburgh Law Review, 47, 1185–1224.
Samuelson, P. (2023). Generative AI meets copyright. Science, 381(6656), 158–161. https://doi.org/10.1126/science.adj0396
The Harvard Crimson. (2023, September 1). Harvard releases guidance for AI use in classrooms. The Harvard Crimson. https://www.thecrimson.com/article/2023/9/1/fas-ai-guidance/
Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313. https://doi.org/10.1126/science.adg7879
Thorn, P. D. (2015). Nick Bostrom: Superintelligence: Paths, dangers, strategies. Minds and Machines, 25(3), 285–289. https://doi.org/10.1007/s11023-015-9377-7
UNESCO. (2021). Recommendation on the ethics of artificial intelligence. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000380455
University of Oxford. (2024). AI and academic practice. Centre for Teaching and Learning. https://www.ctl.ox.ac.uk/ai
University of Pittsburgh Teaching Center. (2023, June 22). Teaching Center doesn’t endorse any generative AI detection tools. University Times. https://www.utimes.pitt.edu/news/teaching-center-doesn-t
University of Pittsburgh Teaching Center. (2025, January 24). Encouraging academic integrity. https://teaching.pitt.edu/resources/encouraging-academic-integrity/
U.S. Copyright Office. (2024). Artificial intelligence and copyright. https://www.copyright.gov/ai/
Weber-Wulff, D., Anohina-Naumeca, A., Bjelobaba, S., Foltýnek, T., Guerrero-Dib, J., Popoola, O., ... & Waddington, L. (2023). Testing of detection tools for AI-generated text. International Journal for Educational Integrity, 19(1), 1–39. https://arxiv.org/abs/2306.15666
Westerlund, M. (2019). The emergence of deepfake technology: A review. Technology Innovation Management Review, 9(11), 39–52.
Wilkinson, M. D., Dumontier, M., Aalbersberg, I. J., Appleton, G., Axton, M., Baak, A., ... Mons, B. (2016). The FAIR guiding principles for scientific data management and stewardship. Scientific Data, 3, 160018. https://doi.org/10.1038/sdata.2016.18
Woodmansee, M., & Jaszi, P. (Eds.). (1994). The construction of authorship: Textual appropriation in law and literature. Duke University Press.
Worth, S., Snaith, B., Das, A., Thuermer, G., & Simperl, E. (2024). AI data transparency: An exploration through the lens of AI incidents. arXiv preprint arXiv:2409.03307. https://arxiv.org/abs/2409.03307
Yaqeen Institute. (n.d.). Authenticating hadith and the history of hadith criticism. https://yaqeeninstitute.org/read/paper/authenticating-hadith-and-the-history-of-hadith-criticism
Zednik, C. (2021). Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Philos. Technol. 34, 265–288. https://doi.org/10.1007/s13347-019-00382-7
Alshaar, A. M. K. (2025). Knowledge Lineage from Isnad to AI: Reframing Authorship and Responsibility in the Generative AI Era. International Journal of Academic Research in Business and Social Sciences, 15(9), 1131-1154.