Isabela State University Linker: Journal of Engineering, Computing and Technology
https://www.isujournals.ph/index.php/ject/issue/feed (2025-08-21T00:00:00+00:00)
[ISU Linker Journals: JECT] Chief Editor: Edward B. Panganiban (isulinkerjournal@isu.edu.ph)
Open Journal Systems

The Isabela State University Linker: Journal of Engineering, Computing and Technology publishes research papers on engineering disciplines, their applications, and interdisciplinary topics. Coverage includes electrical and electronics engineering, engineering mathematics and physics, civil engineering, agricultural and biosystems engineering, computing science and information technology, and e-government and e-commerce; computational and stochastic methods, optimization, nonlinear dynamics, modeling and simulation, and computational electromagnetics; sustainable building materials, seismic design, water resources management, and intelligent transportation infrastructure; and computer science and information technology topics such as information systems, theory, algorithms, and data mining.

https://www.isujournals.ph/index.php/ject/article/view/207 (2025-05-20T01:32:58+00:00)
Development of an Integrated Natural and Socioeconomic Indicators Monitoring System for Bulacan Using Earth Intelligence Tools
Mark Neil Pascual (markneilpascual@gmail.com), Jeffrey Leonen (jtleonen@amaes.edu.ph)

This study introduces an Integrated Natural and Socioeconomic Indicators Monitoring System for Bulacan, designed to strengthen local flood risk assessment and enhance disaster preparedness. Recognizing gaps in current methodologies, this research integrates Earth intelligence tools, incorporating real-time satellite imagery, geospatial analyses, socioeconomic indicators, and localized datasets. The primary objective is to provide an accurate, dynamic platform for predictive analytics, prescriptive mitigation, and real-time flood risk monitoring. Methodologically, the study employs a Single Page Application (SPA) architecture hosted on cloud infrastructure, utilizing React.js for visualization and a Django REST API for data management. Stakeholder evaluations conducted in Meycauayan City and surrounding barangays revealed significant improvements in flood prediction accuracy, enhanced decision-making speed, and increased overall user satisfaction. Challenges encountered during system implementation included complexities in data integration and maintaining consistent responsiveness during peak usage. The research concludes that the developed system effectively addresses critical limitations of fragmented disaster response mechanisms. Key recommendations include continuous technological enhancement, comprehensive stakeholder training, and broader integration of hazard datasets to further augment disaster resilience capabilities in Bulacan.

Published: 2025-06-30 | Copyright (c) 2025 Isabela State University Linker: Journal of Engineering, Computing and Technology
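To illustrate the SPA architecture the abstract above describes (a React.js front end backed by a Django REST API), the following minimal sketch shows how a per-barangay flood-risk endpoint might be exposed for the client to consume. The route, field names, and values are illustrative assumptions, not the authors' implementation.

```python
# urls.py (illustrative): path("api/flood-risk/<int:barangay_id>/", FloodRiskIndicatorView.as_view())
from rest_framework.views import APIView
from rest_framework.response import Response

class FloodRiskIndicatorView(APIView):
    """Return the latest natural and socioeconomic indicators for one barangay."""

    def get(self, request, barangay_id):
        # In the described system these values would be derived from satellite imagery,
        # geospatial layers, and localized datasets; the figures below are placeholders.
        indicators = {
            "barangay_id": barangay_id,
            "rainfall_mm_24h": 87.5,       # hypothetical remote-sensing estimate
            "river_level_m": 3.2,          # hypothetical telemetry value
            "population_exposed": 12450,   # hypothetical socioeconomic indicator
            "risk_level": "HIGH",          # hypothetical output of the predictive model
        }
        return Response(indicators)
```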
https://www.isujournals.ph/index.php/ject/article/view/210 (2025-05-26T01:18:22+00:00)
Real-Time Sign Language Recognition and Translation: A Mobile Solution Using Convolutional Neural Network
Jessica Reshelle Naraja (jessicareshellenaraja@gmail.com), Client Joseph Leyson (leysonclientjoseph@gmail.com), Jessica Rose Fernandez (jezzfernandez15@gmail.com)

This study presents a mobile application for sign language recognition and translation using a convolutional neural network (CNN) to overcome communication barriers for the deaf community. Unlike existing solutions, the app uses a CNN trained on a dataset of 200–450 images per sign to process hand images via preprocessing, feature extraction, and hand landmark detection, accurately recognizing sign language gestures. The application features a user-friendly interface and is designed for real-time mobile use. Employing CNN-based image processing, it recognizes and translates hand gestures with high precision, achieving 96% accuracy and a loss of 0.069 after 100 training epochs with a batch size of 40. Usability testing, conducted using the System Usability Scale (SUS) questionnaire, revealed high user satisfaction, with positive feedback on usability, functionality, maintainability, and efficiency. The average SUS score indicates excellent usability. Further evaluation criteria included precision, recall, and F1-score, all of which demonstrated strong performance. The system effectively bridges the communication gap between the deaf and hearing communities, fostering more accessible and meaningful interactions.

Published: 2025-06-30 | Copyright (c) 2025 Isabela State University Linker: Journal of Engineering, Computing and Technology

https://www.isujournals.ph/index.php/ject/article/view/208 (2025-05-20T00:21:17+00:00)
Deepfake Speech Detection: Identifying AI-Generated and Real Human Voices Using Hybrid Convolutional Neural Network and Long Short-Term Memory Model
Marc Laureta (marclaureta@gmail.com), John Maynardk Atienza (johnmaynardk.atienza@neu.edu.ph), John Lemuel Tapen (johnlemuel.taepl@neu.edu.ph)

This study explored deepfake audio detection using English and Tagalog datasets to enhance multilingual speech classification. The rise of synthetic media, particularly deepfake audio, raises concerns about misinformation, security, and authenticity. To address this, the researchers developed a web-based detection system using a hybrid Convolutional Neural Network–Long Short-Term Memory (CNN-LSTM) model, which captured spatial and temporal features for accurate classification. The approach leveraged Mel spectrograms, convolutional layers for spatial patterns, and LSTM networks for temporal dependencies. Trained on an augmented dataset of over 176,000 samples and fine-tuned using TensorFlow, the model achieved 98.65% accuracy, with a precision of 98.60% and a recall of 98.76%. The system employed class weighting to address imbalance and used mixed-precision training for efficiency. Its architecture included Conv2D layers with Batch Normalization and MaxPooling, followed by TimeDistributed Dense layers and an LSTM for sequential modeling. Regularization and callbacks optimized performance, which was evaluated using accuracy, precision, recall, F1-score, and a confusion matrix. Results confirmed its efficacy in distinguishing real and AI-generated voices, mitigating risks from synthetic speech. Future work may refine dataset diversity and optimize system responsiveness for broader real-world implementation.

Published: 2025-06-30 | Copyright (c) 2025 Isabela State University Linker: Journal of Engineering, Computing and Technology
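The architecture described in the abstract above maps fairly directly onto a Keras model: Conv2D blocks with Batch Normalization and MaxPooling over Mel spectrograms, followed by TimeDistributed Dense layers and an LSTM. The sketch below shows one such layer stack; the filter counts, input shape, and layer widths are assumptions rather than the paper's exact configuration.

```python
from tensorflow.keras import layers, models

def build_cnn_lstm(n_mels=128, n_frames=128):
    """Conv2D + BatchNorm + MaxPooling over a Mel spectrogram, then TimeDistributed Dense and LSTM."""
    inputs = layers.Input(shape=(n_mels, n_frames, 1))         # Mel spectrogram treated as an image
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.BatchNormalization()(x)
    x = layers.MaxPooling2D(2)(x)
    # Use the (downsampled) time axis as the sequence dimension for the recurrent part.
    x = layers.Permute((2, 1, 3))(x)                           # (time, mel, channels)
    x = layers.Reshape((n_frames // 4, (n_mels // 4) * 64))(x)
    x = layers.TimeDistributed(layers.Dense(128, activation="relu"))(x)
    x = layers.LSTM(64)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)         # real vs. AI-generated voice
    return models.Model(inputs, outputs)

model = build_cnn_lstm()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# Class weighting, as described in the abstract, would be supplied via model.fit(..., class_weight=...).
```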
https://www.isujournals.ph/index.php/ject/article/view/213 (2025-05-20T00:40:16+00:00)
HadouQen: Adaptive AI Agent Using Reinforcement Learning in Street Fighter II: Special Champion Edition
Isaiah Phil Pangilinan (isaiahphilpangilinan@gmail.com), Neo Alaric Villanueva (neoalaric.villanueva@neu.edu.ph), Irish Paulo Tipay (iprtipay@neu.edu.ph), Audrey Lyle Diego (alddiego@neu.edu.ph)

This study presents the development of an AI agent trained using Proximal Policy Optimization (PPO) to compete in Street Fighter II: Special Champion Edition. The agent learned optimal combat strategies through reinforcement learning, processing visual input from frame-stacked grayscale observations (84 × 84 pixels) obtained through the OpenAI Gym Retro environment. Using a convolutional neural network architecture with carefully tuned hyperparameters, the model was trained across 16 parallel environments over 100 million timesteps. The agent was tested against M. Bison, the game's final boss and most challenging opponent, across 1,000 consecutive matches to evaluate performance. Results showed exceptional performance, with a 96.7% win rate and an average reward of 0.912. Training metrics revealed a healthy learning progression, showing steady improvement in average reward per episode, decreased episode length indicating more efficient victories, and stable policy convergence. The findings demonstrate the effectiveness of PPO-based reinforcement learning in mastering complex fighting game environments and provide a foundation for future research on competitive game-playing agents capable of human-level performance in fast-paced interactive scenarios.

Published: 2025-06-30 | Copyright (c) 2025 Isabela State University Linker: Journal of Engineering, Computing and Technology

https://www.isujournals.ph/index.php/ject/article/view/206 (2025-05-20T07:07:41+00:00)
Transcribing Filipino Syllables into Baybayin Script Using Convolutional Neural Network with Long Short-Term Memory Architecture for Spoken Tagalog Recognition
Steven Epis (stevenepis@gmail.com), Chelsea Mariae Panugaling (jagoylo@southernleytestateu.edu.ph), Abegail Jamel (jagoylo@southernleytestateu.edu.ph), Daisy Decio (jagoylo@southernleytestateu.edu.ph), Jose Agoylo (jagoylo@southernleytestateu.edu.ph)

This study describes the use of machine learning technologies to convert spoken Tagalog, syllable by syllable, into the Baybayin script, which was used in the Philippines long before the arrival of the Spaniards. The model integrated audio data covering the phonetic aspects of speech and their correct mapping to Baybayin symbols. The model's overall accuracy is 96%, demonstrating reliable performance in transcribing spoken Tagalog syllables into Baybayin text. The CNN-LSTM architecture proved effective, underscoring the potential of advanced speech recognition technologies for cultural preservation. By modeling the phonetic-to-symbolic relationships in spoken Tagalog, the system offers valuable contributions to linguistic research, especially in areas such as phonology, orthography, and morpho-syllabic analysis of Filipino, thereby bridging traditional scripts and modern language technologies. This study emphasizes the need for further dataset expansion and support for diverse linguistic variations to enhance the system's inclusivity and applicability. It is an important addition to technology-based cultural preservation, paving the way for similar projects in other languages and scripts. Further studies may enhance the system to transcribe phrases, sentences, and paragraphs.

Published: 2025-06-30 | Copyright (c) 2025 Isabela State University Linker: Journal of Engineering, Computing and Technology
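Assuming the CNN-LSTM recognizer described above emits one Tagalog syllable label per audio segment, the remaining step is a symbolic mapping from syllables to Baybayin glyphs with kudlit (vowel-mark) handling. The sketch below illustrates that mapping with a small, assumed subset of syllables; the study's actual label set and mapping rules are not reproduced here.

```python
# An illustrative subset only; the study's full syllable inventory is not reproduced here.
BAYBAYIN = {
    "a": "\u1700",   # TAGALOG LETTER A
    "i": "\u1701",   # TAGALOG LETTER I
    "u": "\u1702",   # TAGALOG LETTER U
    "ka": "\u1703",  # TAGALOG LETTER KA
    "na": "\u1708",  # TAGALOG LETTER NA
    "ba": "\u170A",  # TAGALOG LETTER BA
    "ya": "\u170C",  # TAGALOG LETTER YA
}
KUDLIT_I = "\u1712"  # vowel sign for i/e
KUDLIT_U = "\u1713"  # vowel sign for u/o

def syllable_to_baybayin(syllable: str) -> str:
    """Map one recognized CV syllable or lone vowel (e.g. 'ni', 'a') to Baybayin glyphs."""
    if syllable in BAYBAYIN:                     # lone vowels and inherent-'a' syllables
        return BAYBAYIN[syllable]
    consonant, vowel = syllable[:-1], syllable[-1]
    glyph = BAYBAYIN[consonant + "a"]            # consonant glyphs carry an inherent 'a'
    return glyph + (KUDLIT_I if vowel in "ie" else KUDLIT_U)

# e.g. a predicted syllable sequence for "bayani":
print("".join(syllable_to_baybayin(s) for s in ["ba", "ya", "ni"]))
```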
https://www.isujournals.ph/index.php/ject/article/view/182 (2025-04-24T06:46:29+00:00)
Predicting Undergraduate Applicants for Enrollment Using Binary Classification Machine Learning Techniques
Christian Guillermo (christian.g.guillermo@isu.edu.ph)

This study aimed to develop a binary classification machine learning model to predict undergraduate applicants' enrollment. It used 2024 student admissions data, such as the applicant's general weighted average, College Admission Test results, interview score, and personal information, and employed Logistic Regression, Naïve Bayes, K-Nearest Neighbors, and Support Vector Machine models. The dataset was preprocessed using imputation, one-hot label encoding, standardization, and SMOTE to handle the class imbalance. Model performance was evaluated using accuracy, precision, recall, and F1 score, with the Support Vector Machine emerging as the best-performing model at an accuracy of 82%. To enhance model transparency and stakeholder trust, Explainable AI (XAI) methods were employed to interpret how and why predictions were made. These findings support the ethical use of artificial intelligence in admissions and provide a policy framework for a data-driven selection process. The model's predictive and interpretative capabilities can help the university streamline the admission process, optimize resources, and maintain fairness. Future researchers can include real-time data and broader factors to improve adaptability and support inclusive education goals associated with the SDGs and the Times Higher Education Impact Rankings.

Published: 2025-06-30 | Copyright (c) 2025 Isabela State University Linker: Journal of Engineering, Computing and Technology
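The preprocessing and modeling steps listed in the abstract above (imputation, encoding, standardization, SMOTE for class imbalance, and a Support Vector Machine) correspond naturally to a scikit-learn / imbalanced-learn pipeline. The sketch below shows one such arrangement; the column names and parameter choices are illustrative assumptions, not the study's exact setup.

```python
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.svm import SVC
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline as ImbPipeline

numeric = ["gwa", "cat_score", "interview_score"]      # hypothetical column names
categorical = ["sex", "strand", "school_type"]         # hypothetical column names

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric),
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("onehot", OneHotEncoder(handle_unknown="ignore"))]), categorical),
])

clf = ImbPipeline([
    ("prep", preprocess),
    ("smote", SMOTE(random_state=42)),   # oversampling applied only when fitting
    ("svm", SVC(kernel="rbf")),
])
# clf.fit(X_train, y_train) and clf.predict(X_test) would then give enroll / not-enroll labels.
```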
https://www.isujournals.ph/index.php/ject/article/view/168 (2025-04-24T06:55:51+00:00)
Development and Performance Evaluation of Locally Manufactured Tractor-Drawn Plastic Mulch-Laying Implement Using Disc Plow at Different Forward Speeds
Everich Ramos (everich.p.ramos@isu.edu.ph), Cesar Mangadap (everich.p.ramos@isu.edu.ph), Joel Alcaraz (everich.p.ramos@isu.edu.ph), Jeoffrey Lloyd Bareng (everich.p.ramos@isu.edu.ph)

The plastic mulch-laying implement was locally developed for the mechanical application of plastic mulch. It was built from locally available materials, resulting in a lower cost of production. Its soil-covering performance with a disc plow was evaluated at five forward speeds, and the effect of forward speed on actual field capacity, field efficiency, and unsecured mulch was studied. The results of the experiment showed that the average actual field capacity is 0.14 ha/hr (1.12 ha/day). At low speed, actual field capacity decreased and unsecured mulch increased by an average of 13% and 6%, respectively, compared to the forward speed. The highest field efficiency of 46.14% was achieved at a forward speed. Benefit-cost ratio analysis showed that the plastic mulch-laying implement is economically viable. The results of this study showed that the implement is reliable and efficient for plastic mulch application; hence, the study recommends using it to make raised beds and lay plastic mulch at a forward speed.

Published: 2025-06-30 | Copyright (c) 2025 Isabela State University Linker: Journal of Engineering, Computing and Technology

https://www.isujournals.ph/index.php/ject/article/view/211 (2025-05-20T00:32:25+00:00)
Hybrid Convolutional-Recurrent Neural Networks (CNN-RNN) Model with Temporal Attention and Particle Swarm Optimization for Deepfake Video Detection
Jeremias Esperanza (jeremiasesperanza@gmail.com), Jean Fidelio Marquez (jeanfidelio.marquez@neu.edu.ph), Ron Anthony Sy (ronanthony.sy@neu.edu.ph)

The rapid advancement of deepfake technology presents a growing threat to information integrity and online security. To address this, this research proposed an efficient deepfake video detection framework that integrates Convolutional Neural Networks (CNNs) for spatial feature extraction, Recurrent Neural Networks (RNNs) with a temporal attention mechanism for modeling sequential dependencies, and Particle Swarm Optimization (PSO) for hyperparameter tuning. The pipeline included frame extraction, face alignment, and feature processing using a pre-trained CNN, followed by an RNN that emphasizes critical temporal artifacts through attention. PSO further enhanced model performance by optimizing key hyperparameters such as learning rate and hidden dimensions. To evaluate the effectiveness of the proposed model, a comparative analysis against existing deepfake detection methods, including XceptionNet, LSTM with frame-level features, and CNN-GRU without attention, was conducted. The proposed CNN-RNN model with Temporal Attention and PSO outperformed the baselines, demonstrating improved generalization and reliability, particularly in reducing false negatives, and making it a robust solution for real-world media forensics and platform integrity.

Published: 2025-06-30 | Copyright (c) 2025 Isabela State University Linker: Journal of Engineering, Computing and Technology
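For the Particle Swarm Optimization stage described above, which tunes hyperparameters such as the learning rate and hidden dimension, a generic PSO loop looks roughly like the following. The evaluate callback stands in for training the CNN-RNN and returning a validation score; the swarm size, bounds, and coefficients are illustrative assumptions.

```python
import numpy as np

def pso_search(evaluate, n_particles=8, n_iters=10,
               bounds=((1e-5, 1e-2), (32, 256)), w=0.7, c1=1.5, c2=1.5):
    """Maximize evaluate(learning_rate, hidden_dim) with a basic PSO loop."""
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    pos = np.random.uniform(lo, hi, size=(n_particles, 2))   # particle positions
    vel = np.zeros_like(pos)
    pbest = pos.copy()                                        # per-particle best positions
    pbest_val = np.array([evaluate(*p) for p in pos])
    gbest = pbest[pbest_val.argmax()]                         # swarm-wide best position
    for _ in range(n_iters):
        r1 = np.random.rand(n_particles, 1)
        r2 = np.random.rand(n_particles, 1)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([evaluate(*p) for p in pos])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmax()]
    return gbest  # best (learning_rate, hidden_dim); the hidden size is rounded when used

# evaluate(lr, hidden) would train the CNN-RNN briefly and return validation accuracy, e.g.
# best_lr, best_hidden = pso_search(lambda lr, h: train_and_validate(lr, int(round(h))))
```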
https://www.isujournals.ph/index.php/ject/article/view/212 (2025-05-20T00:47:39+00:00)
Convolutional Neural Network-Based Ground Coffee Bean Classification in the Philippines
Marc Laureta (marclaureta@gmail.com), Meldrick Jake Carabeo (meldrickjake.carabeo@neu.edu.ph), Aerold Torregoza (aerold.torregoza@neu.edu.ph), Milca Lianne Fulo (milcalianne.fulo@neu.edu.ph)

While existing classification methods rely primarily on visual inspection or limited technological approaches, this research introduced a Convolutional Neural Network model specifically designed to address the challenges of classifying four major coffee varieties found in the Philippines: Arabica, Excelsa, Liberica, and Robusta. A comprehensive dataset of 1,817 ground coffee bean images captured under different lighting conditions, background colors, camera angles, and elevations was collected. To mitigate these challenges, advanced preprocessing and augmentation techniques were employed, including strategic resizing, flipping, and normalization to enhance the model's generalizability. The dataset was divided into 80% training, 10% validation, and 10% testing sets to ensure reliable evaluation of model performance. Utilizing TensorFlow and Keras on Kaggle, the CNN model was developed and subsequently deployed via a web-based application using Flask and HTML, offering an innovative, user-friendly interface for coffee bean classification. The model achieved a high overall classification accuracy of 96%, with the Robusta and Arabica varieties demonstrating perfect classification. Thus, CNNs can effectively support the Philippine coffee industry by automating bean classification. Future work may focus on expanding the dataset to capture greater variability, refining the model (particularly for the Excelsa and Liberica varieties), exploring advanced machine learning techniques to improve consistency and real-world deployment, and integrating the model into real-time classification systems to support broader adoption in the coffee industry.

Published: 2025-06-30 | Copyright (c) 2025 Isabela State University Linker: Journal of Engineering, Computing and Technology
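As an illustration of the Flask deployment mentioned in the abstract above, the following minimal sketch serves a saved Keras classifier for the four varieties. The model filename, input size, and route are assumptions, not the authors' deployment code.

```python
import numpy as np
from flask import Flask, request, jsonify
from PIL import Image
from tensorflow.keras.models import load_model

CLASSES = ["Arabica", "Excelsa", "Liberica", "Robusta"]

app = Flask(__name__)
model = load_model("coffee_cnn.h5")          # hypothetical path to the trained model

@app.route("/predict", methods=["POST"])
def predict():
    # Read the uploaded ground-coffee image and apply the same resizing/normalization
    # used during training (224 x 224 and [0, 1] scaling are assumed values here).
    img = Image.open(request.files["image"]).convert("RGB").resize((224, 224))
    x = np.asarray(img, dtype="float32")[np.newaxis] / 255.0
    probs = model.predict(x)[0]
    return jsonify({"variety": CLASSES[int(np.argmax(probs))],
                    "confidence": float(np.max(probs))})

if __name__ == "__main__":
    app.run()
```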