Authors - Rehab Abdulmonem Ali Alshireef Abstract - The advent of large language models (LLMs) has marked a turning point in artificial intelligence applications within healthcare. Med-PaLM 2, developed by Google, stands out as a specialized model trained on medical data that has demonstrated expert-level performance on the USMLE. This literature review explores the educational potential of Med-PaLM 2 across different learner levels—medical students, residents, and practicing physicians. It evaluates the benefits, limitations, and contextual challenges of adopting such AI tools in the Arab world, particularly in remote education and clinical skills laboratories. While Med-PaLM 2 offers new opportunities for personalized learning and simulation-based training, its integration must be guided by ethical frameworks, policy development, and regional adaptation efforts to ensure equitable and effective implementation.
Authors - Aliou Ngor Diouf, Ibrahimma Fall, Lamine Diop Abstract - SenSCHOOL Ontology is an extension of the CIDOC-CRM (Conceptual Reference Model) designed to model and integrate information about Tariqas, the Sufi religious brotherhoods present in West Africa, in all their diversity. CIDOC-CRM is a widely used generic data model for exchanging and integrating information from a variety of heterogeneous cultural heritage (CH) sources. The main objective of SenSCHOOL is to facilitate the management, preservation, and exchange of information about Tariqas in a structured and organized manner. We describe the methodology and steps we followed to design SenSCHOOL and present its implementation, which enables the integration and structuring of information from heterogeneous sources.
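As a minimal sketch of what a CIDOC-CRM extension of this kind can look like, the snippet below declares a hypothetical Tariqa class as a specialization of the CRM class E74_Group using rdflib; the sen namespace, the Tariqa class name, and the Tijaniyya individual are illustrative assumptions, not taken from the paper.

```python
# Sketch (not the authors' ontology): anchor a "Tariqa" extension class
# in the CIDOC-CRM hierarchy and attach one example individual.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
SEN = Namespace("http://example.org/senschool#")  # placeholder namespace

g = Graph()
g.bind("crm", CRM)
g.bind("sen", SEN)

# Declare the extension class as a subclass of the CRM group concept.
g.add((SEN.Tariqa, RDF.type, RDFS.Class))
g.add((SEN.Tariqa, RDFS.subClassOf, CRM.E74_Group))
g.add((SEN.Tariqa, RDFS.label, Literal("Tariqa (Sufi brotherhood)", lang="en")))

# Example individual linked to a member via a standard CRM property.
g.add((SEN.Tijaniyya, RDF.type, SEN.Tariqa))
g.add((SEN.Tijaniyya, CRM.P107_has_current_or_former_member, SEN.someMember))

print(g.serialize(format="turtle"))
```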
Authors - Toni Tani, Lasse Metso, Timo Karri Abstract - Digital transformation is reshaping business ecosystems through advances in artificial intelligence (AI), process automation, enhanced analytics, improved information visualization, and increased innovation. This study examines the impact of AI on ecosystems using traditional bibliometric analysis combined with a novel approach to processing large volumes of textual data. First, 232 documents published between 2014 and 2024 from the Scopus database were analyzed using Bibliometrix and Biblioshiny to identify influential authors, thematic clusters, and emerging research areas. In the second phase, a text-network analysis tool called Infranodus was used to scan and analyze the 54 most relevant abstracts from 2023-2024, after which the extracted insights were refined using generative AI (genAI). The extracted information was then further developed through prompt engineering over visual graphs and ChatGPT, yielding results that demonstrate the potential of genAI for iteratively conducting research and managing business ecosystems. Ultimately, this study shows a novel way of combining bibliometric data and visual prompt engineering to harness dynamic relations iteratively.
Authors - Syeda Sohail, Maurice van Keulen Abstract - Process mining enables organizations to gain actionable insights into their business processes by analyzing digital footprints extracted from information systems. These insights unravel inefficiencies and enable process enhancement through bottleneck detection and conformance checking. This paper presents a case study where process mining is applied to five real-world event logs of a Commerce Platform-as-a-Service provider to expedite the business process by reducing waiting times and minimizing multiple customer interactions. A comprehensive process mining project methodology was implemented to conduct the case study. The findings revealed key bottlenecks and underlying factors that contribute to delays and excessive customer interactions. In response, process enhancement recommendations were implemented through adjustments to the organization’s templates for efficient business process optimization. The study also addresses the privacy-utility tradeoff by ensuring that the event logs adhere to privacy-by-design requirements without compromising the utility of the data. In fact, fulfilling these requirements further refined the process mining and data analysis by minimizing and abstracting the event logs in this relatively less sensitive domain.
Authors - Salvatore Vella, Fatima Hussain, Salah Sharieh, Alex Ferworn Abstract - Policies and procedures coordinate the work of multiple knowledge workers. These are standardized workflows with specified inputs and outputs. AI agents can automate some or all of the steps in such workflows. This automation can greatly enhance efficiency, minimize human error, and free employees to focus on more strategic tasks while providing oversight of the more routine ones. This paper examines the application of AI agents to understand and automate these workflows. We propose a framework in which the policy or procedure is corrected via a large language model and translated into a simplified BPEL (Business Process Execution Language) form for later execution by AI agents. This two-step approach enables the creation of reusable policy and procedure libraries for AI agents. We demonstrate that improved policies and procedures can be created from the code. Through case studies, we show the practical benefits in real-world office settings. Integrating AI agents into knowledge work professions is an important research topic; this framework shows how it can be done in a standardized way. We provide the source code and artifacts for these experiments.
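A minimal sketch of the two-step idea described here, assuming a generic LLM client: step one asks the model to correct and normalize the policy text, step two asks it to emit a simplified BPEL process. The llm() helper is a hypothetical stand-in, not the authors' implementation.

```python
# Sketch under assumptions: policy text -> corrected text -> simplified BPEL.
import xml.etree.ElementTree as ET

def llm(prompt: str) -> str:
    """Hypothetical LLM client; replace with a real model API call."""
    raise NotImplementedError

def policy_to_bpel(policy_text: str) -> ET.Element:
    corrected = llm(f"Correct and normalize this procedure:\n{policy_text}")
    bpel_xml = llm(
        "Translate the following procedure into a simplified BPEL <process> "
        f"using only <sequence> and <invoke> steps:\n{corrected}"
    )
    # Validate that the model returned well-formed XML before storing it
    # in the reusable procedure library.
    return ET.fromstring(bpel_xml)
```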
Authors - Tsholofetso Taukobong, Audrey Naledi Masizana, George Anderson Abstract - This research contributes to the performance of the X-Ray Transmission sensor-based sorting process used in diamond sorting. The aim is to overcome the shortcomings of current baseline methods for detecting small, highly overlapped, and irregularly shaped rock particles that could go undetected during the waste recovery process. Most methods work well when approximate shape and size are well known and particles are not highly overlapped. Due to the challenges of over-segmentation and under-segmentation, several image segmentation techniques are explored in order to propose a new and improved segmentation process that aims to reduce false negatives and false positives, thus improving the performance and efficiency of the waste recovery process. This paper reports on explorations of classical methods and preliminary experiments from the ongoing research.
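One classical technique often explored for touching or overlapping particles is marker-based watershed segmentation. The sketch below, with an assumed input image and illustrative thresholds, shows the standard OpenCV recipe; it is an example of the family of methods surveyed, not the authors' proposed process.

```python
# Sketch: distance-transform markers + watershed to split touching particles.
import cv2
import numpy as np

img = cv2.imread("xrt_scan.png")            # hypothetical XRT scan
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Sure-foreground seeds from the distance transform help split particles
# that touch or overlap, a common cause of under-segmentation.
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
sure_fg = np.uint8(sure_fg)
sure_bg = cv2.dilate(binary, np.ones((3, 3), np.uint8), iterations=3)
unknown = cv2.subtract(sure_bg, sure_fg)

n_markers, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1                        # reserve 0 for the unknown region
markers[unknown == 255] = 0
markers = cv2.watershed(img, markers)        # particle boundaries labeled -1
```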
Authors - Emilia-Loredana Pop, Augusta Ratiu, Daniela-Maria Cristea Abstract - In this article, we analyze the subjects Databases, Database Management Systems, and Web Programming for students enrolled in Computer Science specializations. The data were collected over one university year through an anonymous survey. We focused on the students’ evaluation of these subjects, and comparisons by gender and study line (English and Romanian) are also provided. The lectures and labs were enjoyed for all the subjects, with minor remarks concerning interaction and communication. For Databases, query optimization proved harder, and for Web Programming, resolving lab errors brought challenges. The evaluation scores for the subjects were high, exceeding 62%.
Authors - Ulrich Tedongmo Douanla, Jean Louis Kedieng Ebongue Fendji, Giquel Therance Sassa, Marcellin Atemkeng Abstract - The Internet has become an essential tool for modern activities and a fundamental right for digital inclusion. However, many regions, particularly in Africa, remain underserved, with limited or unstable Internet access. To address this issue, communities, together with support organizations, have deployed community networks: wireless infrastructures that provide connectivity to local populations. Despite their benefits, these networks frequently experience outages that impact both network infrastructure and associated services. Ensuring their reliability requires effective monitoring and supervision solutions. In this work, we propose a supervision platform that leverages the Simple Network Management Protocol (SNMP), log file analysis, and Big Data technologies to enable real-time monitoring of community networks. SNMP is employed to collect device status data, while log files provide insights into the performance of network applications. To facilitate scalable and real-time processing, we integrate Spark Structured Streaming, enabling continuous data analysis and proactive issue detection. The platform also includes an alerting system that delivers notifications via SMS, email, or other channels in case of failures. By providing a comprehensive view of network health and automating incident response, our solution enhances the availability and resilience of community networks, ultimately improving Internet access in underserved regions.
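To make the streaming step concrete, here is a minimal Spark Structured Streaming sketch that flags repeatedly unreachable devices. The input path, record schema, and the three-reports-in-five-minutes rule are assumptions for illustration, not the platform's actual configuration.

```python
# Sketch: consume JSON status records from SNMP pollers and raise alerts
# when a device reports "down" repeatedly within a window.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cn-supervision").getOrCreate()

# Each record is assumed to look like {"device": ..., "status": ..., "ts": ...}.
events = (spark.readStream
          .schema("device STRING, status STRING, ts TIMESTAMP")
          .json("/data/snmp-status/"))

# Three or more "down" reports for a device in a 5-minute window trigger an
# alert; the SMS/email notifier would consume this stream downstream.
alerts = (events.filter(F.col("status") == "down")
          .groupBy(F.window("ts", "5 minutes"), "device")
          .count()
          .filter(F.col("count") >= 3))

query = (alerts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```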
Authors - Vishnu Kumar Abstract - Heart disease remains a leading cause of mortality in the United States, responsible for approximately 1 in 5 deaths in 2022. Modifiable behavioral and lifestyle factors, such as smoking, physical activity, and diet, play a critical role in cardiovascular risk. This study applies a machine learning (ML) approach to predict heart disease risk in the U.S. using data from the 2022 Behavioral Risk Factor Surveillance System (BRFSS). Three ML-based classification models were developed using ten key behavioral and lifestyle features: general health perception, days of poor physical and mental health, time since the last checkup, physical activity engagement, average sleep duration, smoking status, e-cigarette use, body mass index (BMI), and alcohol consumption. Among the three models, XGBoost exhibited superior performance, achieving an F1-score of 0.92 with balanced precision and recall across both classes. SHAP (Shapley Additive Explanations) was then used to identify the impact of behavioral and lifestyle factors on heart disease risk. Global SHAP analysis revealed that general health, poor mental health, and BMI were the most influential features affecting heart disease risk. Local SHAP analysis showed that the importance of individual features varied across observations, with factors such as time since the last checkup and smoking status significantly influencing heart disease risk for certain individuals. These findings demonstrate the potential of explainable ML techniques to identify actionable, personalized cardiovascular risk factors. The insights gained can help healthcare providers tailor interventions and prevention strategies, prioritize high-risk individuals for early detection, and allocate resources more effectively to reduce the burden of heart disease.
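A minimal sketch of this kind of pipeline, assuming a prepared DataFrame df with the ten features and a binary heart_disease label (the hyperparameters shown are illustrative, not the study's):

```python
# Sketch: XGBoost classifier plus global and local SHAP explanations.
import xgboost as xgb
import shap
from sklearn.model_selection import train_test_split

X = df.drop(columns=["heart_disease"])   # ten behavioral/lifestyle features
y = df["heart_disease"]
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = xgb.XGBClassifier(n_estimators=300, max_depth=6, eval_metric="logloss")
model.fit(X_tr, y_tr)

# Global view: mean |SHAP| ranks features across the whole test set.
# Local view: one observation's SHAP values show individual-level drivers.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
shap.summary_plot(shap_values, X_te)                                   # global
shap.force_plot(explainer.expected_value, shap_values[0], X_te.iloc[0])  # local
```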
Authors - Fawzy Alsharif, Hasan Kaan Aldemir, Akay Deliorman Abstract - This paper presents image processing techniques for detecting eye diseases such as Vessel Tortuosity (VT), Glaucoma, Central Serous Retinopathy (CSR), and Diabetic Retinopathy (DR). The system supports early symptom detection, condition monitoring, and timely intervention. For VT, green channel extraction, Gaussian blurring, and Otsu thresholding isolate vessels, followed by morphological operations and thinning for curvature analysis. In Glaucoma, contrast enhancement and multi-level Otsu thresholding segment the optic disc and cup, enabling Cup-to-Disc Ratio calculation. For CSR, green channel processing and Gaussian blurring highlight fluid accumulation. In DR, lesion visibility is improved through green channel extraction, blurring, and morphological filtering. This integrated approach enhances image clarity and segmentation, achieving 97%–99% accuracy in early disease detection.
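The preprocessing steps shared across these conditions can be sketched as follows; the file name, kernel sizes, and threshold flags are illustrative assumptions rather than the paper's exact parameters.

```python
# Sketch: green-channel extraction, Gaussian blurring, and Otsu thresholding
# to isolate retinal vessels, followed by morphological cleanup.
import cv2

fundus = cv2.imread("fundus.png")           # hypothetical fundus photograph
green = fundus[:, :, 1]                     # green channel gives best vessel contrast
blurred = cv2.GaussianBlur(green, (5, 5), 0)
_, vessels = cv2.threshold(blurred, 0, 255,
                           cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Morphological opening removes small speckles before the thinning and
# curvature analysis used for vessel tortuosity.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
vessels = cv2.morphologyEx(vessels, cv2.MORPH_OPEN, kernel)
```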
Authors - Doaa Abdelrahman, Heba Aslan, Mahmoud M. Nasreldin, Ghada Elkabbany, Mohamed Rasslan Abstract - The Bitcoin economy has grown significantly and rapidly, reaching an estimated market capitalization of around $1.87 trillion. As a type of cryptocurrency—essentially digital money—Bitcoin enables direct transactions between users without relying on a central authority or intermediary. These transactions are validated by network participants using cryptographic techniques and are permanently stored in a decentralized public ledger known as the blockchain. New Bitcoins are introduced into circulation through a process called mining, and they can be traded for conventional currencies, goods, or services. The dramatic increase in Bitcoin’s value has drawn the attention of both cybercriminals aiming to exploit system flaws for profit and researchers working to identify these vulnerabilities, devise protective measures, and anticipate future trends. This paper outlines the Bitcoin protocol by describing its main components, their functions, and how they interact. Moreover, it explores the foundational cryptographic concepts and existing weaknesses within the Bitcoin infrastructure and concludes by assessing the strength and effectiveness of current security approaches.
Authors - Faycal Fedouaki, Mouhsene Fri, Kaoutar Douaioui, Ayoub El Khairi Abstract - The merging of blockchain and the industrial internet of things (IIoT) will reshape how smart manufacturing systems operate. This paper proposes a conceptual framework that uses blockchain's decentralized architecture, cryptographic integrity, and smart contract automation to improve process monitoring in industrial environments. With real-time data collection from IIoT devices and secure, transparent blockchain ledgers, the proposed model addresses important issues in decision making such as tampering, interoperability, and latency. It also supports real-time analytics by reducing latency through edge processing and message queuing. Additional design principles address scalability through layered blockchain structures and fog computing nodes, allowing the framework to keep pace with rising data volumes and increasing device densities. Although the model builds on recent breakthroughs and conforms to Industry 4.0 paradigms, a prototype and experimental simulations are planned to validate its empirical viability. Notably, this work intends to establish a resilient and efficient digital infrastructure for next-generation industrial process monitoring.
Authors - Davide Paglia, Lorenzo Rigatti, Andrea Sabatini, Fabrizio Venettoni Abstract - In the context of the emerging capital markets trend of tokenizing financial assets, this paper explains how a financial security can be registered on a market blockchain and settled in ECB central bank digital currency, in T+0 time and in compliance with the current European regulatory framework. A particular use case is explored through the design, implementation, and use of a new DLT-based market infrastructure, an enterprise application called DLT Bond Platform, for the issuance of a digital bond settled in European Central Bank wholesale digital currency via a delivery-versus-payment process, using a layer 2 permissionless blockchain. After introducing the context and the problem statement in the first two sections, a general description of the proposed solution and its novel contributions is provided in the third section. In the fourth section, the main components of the DLT Bond Platform, both web2 and web3, are described in detail, together with the related business processes, namely: (i) management of the entire life cycle of a bond in digital form; (ii) management of all the settlement phases envisaged by the bond, including atomic transactions for the simultaneous transfer of the securities and the corresponding cash flows (delivery versus payment, or "DvP"), through the use of the solution made available by the Bank of Italy as part of the European Central Bank initiative called New Technologies for Wholesale Settlement; (iii) identification, authorization, and management of users, profiles, and the respective roles on chain; (iv) real-time monitoring and audit trail. The final section focuses on the results obtained and on the completion of the validation process, ultimately dwelling on potential future developments.
Authors - Abdulrahman Azab, Paul De Geest, Sanjay K. Srikakulam, Tomas Vondrak, Mira Kuntz, Bjorn Gruning Abstract - Effective resource scheduling is critical in high-performance computing (HPC) and high-throughput computing (HTC) environments, where traditional scheduling systems struggle with resource contention, data locality, and fault tolerance. Meta-scheduling, which abstracts multiple schedulers for unified job allocation, addresses these challenges. Galaxy, a widely used platform for data-intensive computational analysis, employs the Total Perspective Vortex (TPV) system for resource scheduling. With over 550,000 users, Galaxy aims to optimize scheduling efficiency in large-scale environments. While TPV offers flexibility, its decision-making can be enhanced by incorporating real-time resource availability and job status. This paper introduces the TPV Broker, a meta-scheduling framework that integrates real-time resource data to enable dynamic, data-aware scheduling. TPV Broker enhances scalability, resource utilization, and scheduling efficiency in Galaxy, offering potential for further improvements in distributed computing environments.
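As a purely hypothetical sketch of the dynamic, data-aware scheduling idea (the field names and scoring weights below are illustrative assumptions, not TPV Broker's actual logic), a broker can rank eligible destinations by combining live utilization with data locality:

```python
# Sketch: choose a job destination by trading off free capacity vs. locality.
def pick_destination(job: dict, destinations: list[dict]) -> dict | None:
    def score(dest: dict) -> float:
        locality = 1.0 if job["dataset_site"] == dest["site"] else 0.0
        headroom = 1.0 - dest["utilization"]   # live resource availability
        return 0.6 * headroom + 0.4 * locality

    eligible = [d for d in destinations if d["free_cores"] >= job["cores"]]
    return max(eligible, key=score, default=None)

job = {"cores": 8, "dataset_site": "eu-1"}
destinations = [
    {"site": "eu-1", "free_cores": 16, "utilization": 0.7},
    {"site": "us-1", "free_cores": 64, "utilization": 0.2},
]
print(pick_destination(job, destinations))   # locality vs. load trade-off
```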
Authors - Nina Valchkova, Vasil Tsvetkov Abstract - This paper investigates the dynamic characteristics of a collaborative robotic mobile platform with enhanced manipulability. Its motion parameters, such as linear velocities and accelerations, and their influence on platform control are analyzed. The experiments performed include monitoring acceleration processes, constant lateral movement, and deceleration and braking phases. The presented graphical analyses demonstrate key features of the platform dynamics that can be used to optimize the control of a collaborative robotic mobile platform.
Authors - Sakshi Tiwari, Snigdha Bisht, Kanchan Sharma Abstract - Effective waste management is critical to achieving sustainability in urban regions like Delhi-NCR, where heterogeneous waste streams pose a classification challenge. In this research, we propose WasteIQNet, an intelligent deep hybrid model designed for precise waste classification across 18 categories under a well-defined hierarchy: Wet (Compostable, Special_Disposal) and Dry (Recycle, Reduce, Reuse). Leveraging the WEDR dataset, we first standardized over 175,000 images via JPEG conversion, 256×256 resizing, and RGB formatting. SMOTE+ENN was applied to balance class distributions to 20,000 images each. Feature extraction was achieved through simulated DASC-like global vector embeddings using MobileNetV3. Our baseline hybrid model integrated MobileNetV3Large and GraphSAGE, achieving an initial accuracy of 80.56%. After optimizing the model for multi-label learning through sigmoid activation, threshold-based decoding, and hierarchical label interpretation, we conducted extensive enhancements. Hyperparameter tuning with Optuna, Feature-wise Attention (FWA), and Top-K Mixture of Experts (TopK-MoE) improved accuracy to 83.33%. Subsequent normalization and activation function experiments (Mish, Swish, GELU) led to a peak accuracy of 94.44% using GELU. We further introduced Dynamic Sparse Training (DST) and Model-Agnostic Meta-Learning (MAML), raising accuracy to 95.04%. Final enhancements included label smoothing and early stopping, culminating in a best-in-class accuracy of 97.87%. WasteIQNet demonstrates a scalable, interpretable, and high-performance solution for automated waste classification, supporting smart city initiatives and responsible environmental management.
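The class-balancing step mentioned here can be sketched with imbalanced-learn's combined SMOTE + Edited Nearest Neighbours resampler; the arrays X (feature embeddings) and y (category labels) are assumed to be precomputed, and the snippet is an illustration of the technique rather than the authors' exact configuration.

```python
# Sketch: SMOTE oversampling followed by ENN cleaning via SMOTEENN.
from collections import Counter
from imblearn.combine import SMOTEENN

# X: MobileNetV3 feature embeddings, y: the 18 waste-category labels
# (both assumed to be precomputed NumPy arrays).
resampler = SMOTEENN(random_state=42)
X_balanced, y_balanced = resampler.fit_resample(X, y)

print("before:", Counter(y))
print("after: ", Counter(y_balanced))
```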