0:alnemari
1:research
2:publications
3:lab
arch linux alnemari@tabuk
[0] 0:bash* 1:vim 2:htop 3:python "alnemari@arch"
Mohammed Alnemari
[ identity verified ]
SYSTEM ONLINE // ARCH LINUX // OPEN FOR COLLABORATION

Mohammed Alnemari

role: Assistant Professor & Department Chair

loc: Computer Engineering, University of Tabuk // AIST Research Center // Saudi Arabia

AI systems engineer turning large neural networks into deployable edge systems. I combine pruning, quantization, tensor decomposition, and knowledge distillation to compress DNNs for microcontrollers, FPGAs, and embedded GPUs. Ph.D. from UC Irvine. 12+ publications. IEEE Best Paper Award winner.

publications.log
12+
peer-reviewed papers
awards.log
IEEE EDGE best paper
research.log
8+
active directions
education.log
Ph.D.
UC Irvine // CompEng

What I Work On

Building efficient AI systems at the intersection of deep learning theory, compression algorithms, and real hardware deployment.

EDGE_AI

Edge AI & TinyML

Deploying DNNs on ARM Cortex-M, RISC-V, Jetson, and FPGAs under strict latency, memory, and power budgets. Building portable inference engines.

COMPRESS

Neural Network Compression

Filter pruning, quantization (INT8/binary), tensor decomposition, and knowledge distillation — 10-100x compression with minimal accuracy loss.
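Two of these techniques compose directly: zero out small-magnitude weights, then store the survivors in 8 bits. A minimal NumPy sketch (illustrative only, not the lab's actual pipeline; thresholds and shapes are made up):

```python
import numpy as np

def magnitude_prune(w, sparsity=0.9):
    """Zero out the smallest-magnitude weights until `sparsity` fraction are zero."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < thresh, 0.0, w)

def quantize_int8(w):
    """Symmetric per-tensor INT8: w ~= scale * q, with q in [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)

w_sparse = magnitude_prune(w, sparsity=0.9)
q, scale = quantize_int8(w_sparse)
w_hat = q.astype(np.float32) * scale        # dequantize to check fidelity

# INT8 storage is 4x smaller than FP32, and rounding error is bounded by scale / 2
max_err = np.abs(w_sparse - w_hat).max()
sparsity = (q == 0).mean()
```

Pruned zeros quantize exactly to 0, so sparse storage formats compound the two savings.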

IN_MEMORY

In-Memory AI Computing

RRAM crossbar-based analog accelerators breaking the von Neumann bottleneck. Fault-tolerant architectures for resistive memory devices.
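A crossbar computes a matrix-vector product in a single analog step: Ohm's law per cell and Kirchhoff summation per bit line give i_j = sum_i v_i * G_ij. A toy simulation of that ideal behavior plus lognormal device variation (conductance ranges and noise level are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.uniform(0.1, 1.0, size=(4, 3))   # conductances encoding a 4x3 weight matrix
v = rng.uniform(0.0, 0.5, size=4)        # word-line input voltages

# Ideal crossbar readout: one analog MVP, no data movement (no von Neumann shuttling)
i_ideal = v @ G

# Device-to-device variation modeled as lognormal conductance perturbation
G_noisy = G * rng.lognormal(mean=0.0, sigma=0.05, size=G.shape)
i_noisy = v @ G_noisy

rel_err = np.abs(i_noisy - i_ideal) / i_ideal   # per-bit-line relative error
```

Fault-tolerant architectures aim to keep `rel_err` from corrupting inference as variation grows.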

CONFORMAL

Conformal Prediction

Distribution-free statistical methods providing coverage guarantees for compression hyperparameters — replacing expensive grid search.
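The split-conformal recipe behind such guarantees is short: score a held-out calibration set, then take the ceil((n+1)(1-alpha))/n empirical quantile. A generic sketch on synthetic residuals (not tied to any specific compression search):

```python
import numpy as np

def conformal_quantile(scores, alpha=0.1):
    """Finite-sample-corrected quantile giving >= 1 - alpha marginal coverage."""
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.sort(scores)[k - 1]

rng = np.random.default_rng(2)
cal_scores = np.abs(rng.normal(size=500))    # calibration nonconformity scores
q = conformal_quantile(cal_scores, alpha=0.1)

test_scores = np.abs(rng.normal(size=2000))  # exchangeable test scores
coverage = (test_scores <= q).mean()         # lands near the 0.9 target
```

The coverage bound holds for any score function under exchangeability, which is what makes the method distribution-free.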

BNN

Binary Neural Networks

32x compression through 1-bit weights. Studying loss landscape geometry and training dynamics to make BNNs practical and reliable.
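The storage arithmetic is simple: 1-bit signs plus one FP32 scale per tensor replace 32-bit floats, hence ~32x. A minimal XNOR-Net-style binarization with the straight-through estimator commonly used in BNN training (illustrative, not the lab's training code):

```python
import numpy as np

def binarize(w):
    """XNOR-Net-style binarization: w ~= alpha * sign(w), alpha = mean(|w|)."""
    alpha = np.abs(w).mean()
    return alpha * np.sign(w), alpha

def ste_backward(w, grad_out, clip=1.0):
    """Straight-through estimator: pass gradients where |w| <= clip, zero elsewhere."""
    return grad_out * (np.abs(w) <= clip)

rng = np.random.default_rng(3)
w = rng.normal(size=(128, 128))
w_bin, alpha = binarize(w)

signs = np.unique(np.sign(w_bin))            # only +/-1 survive binarization
grad = ste_backward(w, np.ones_like(w))      # gradients flow through the clip region
```

The hard clipping in `ste_backward` is exactly where the loss-landscape questions arise: the surrogate gradient no longer matches the true (zero almost everywhere) gradient of sign.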

GEOMETRY

Neural Operators & Geometry

Equivariant neural networks and structured pruning respecting symmetry. Geometric deep learning for scientific computing.

Publications

12+ peer-reviewed papers spanning edge computing, neural compression, hardware optimization, and brain-computer interfaces.

~/research/publications $ cat ./*
2026
M. Alnemari, R. Qureshi, N. Bagherzadeh
arXiv:2603.07365
NEW arXiv
2025
M. Alnemari
IEEE EdgeCom 2025 & arXiv:2511.17242
2025
Innovative FIR Filter Design Using L1/L2 Regularization for Sparsity and Smoothness
M. Nerma, A. Elfaki, A. Bushnag, M. Alnemari
Electronics, vol. 14, no. 22, p. 4386
2024
Ultimate Compression: Joint Quantization and Tensor Decomposition for Compact Models on the Edge
M. Alnemari, N. Bagherzadeh
Applied Sciences, vol. 14, no. 20, p. 9354
2023
AI-Enabled Energy Management Device for Sustainable Quality of Life
H. Samkari, O. Albalawi, T. AlKhudaydi, M. Alnemari, M. Allehyani
2023 Saudi Arabia Smart Grid (SASG)
2022
Y. Qiao, M. Alnemari, N. Bagherzadeh
IEEE ICIT 2022
11 citations arXiv
2022
H. Kim, M. Alnemari, N. Bagherzadeh
PeerJ Computer Science, vol. 8, e924
2019
M. Alnemari, N. Bagherzadeh
IEEE International Conference on Edge Computing (EDGE)
BEST PAPER 15 citations IEEE
2017
M. Alnemari
M.S. Thesis, University of California, Irvine
15 citations

Latest Preprints

Automatically fetched from arXiv each month; includes every paper listing me as an author.

Mohammed Alnemari, Rizwan Qureshi, Nader Bagherzadeh
arXiv:2603.07365v1 // cs.LG // 2026-03-07

Neural scaling laws describe how model performance improves as a power law with size, but existing work focuses on models above 100M parameters. The sub-20M regime -- where TinyML and edge AI operate -- remains unexamined. We train 90 models (22K--19.8M parameters) across two architectures (plain...
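The power-law form the abstract refers to, L(N) = a * N^(-b), is linear in log-log space, so the exponent can be recovered by ordinary least squares. A sketch on synthetic losses (the numbers are invented for illustration, not the paper's data):

```python
import numpy as np

a_true, b_true = 5.0, 0.3
N = np.array([2.2e4, 1e5, 1e6, 5e6, 1.98e7])   # model sizes spanning the sub-20M regime
loss = a_true * N ** (-b_true)                 # exact power law, no noise

# log L = log a - b log N  ->  linear regression recovers (a, b)
slope, intercept = np.polyfit(np.log(N), np.log(loss), 1)
b_fit, a_fit = -slope, np.exp(intercept)
```

Real runs deviate from the clean line, and how they deviate in the tiny regime is what the paper studies.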

Mohammed Alnemari
arXiv:2511.17242v1 // cs.CV // 2025-11-21

This paper presents a novel framework combining group equivariant convolutional neural networks (G-CNNs) with equivariant-aware structured pruning to produce compact, transformation-invariant models for resource-constrained environments. Equivariance to rotations is achieved through the C4 cyclic...
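One way to see why the C4 group matters: averaging any feature over the four rotations of the group produces a rotation-invariant output, even when the base feature is not invariant. A toy NumPy check (illustrative only, unrelated to the paper's G-CNN layers):

```python
import numpy as np

def c4_group_average(feature_fn, x):
    """Average feature_fn over the four C4 rotations -> invariant to 90-degree rotation."""
    return np.mean([feature_fn(np.rot90(x, k)) for k in range(4)])

corner = lambda x: float(x[0, 0])   # deliberately NOT rotation-invariant on its own

rng = np.random.default_rng(4)
x = rng.normal(size=(8, 8))

v_orig = c4_group_average(corner, x)
v_rot = c4_group_average(corner, np.rot90(x))   # same value: the group average is invariant
```

Equivariant layers keep this symmetry through the whole network, which is also why pruning must respect the group structure rather than remove filters independently.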

Ye Qiao, Mohammed Alnemari, Nader Bagherzadeh
arXiv:2208.00883v1 // eess.SP // 2022-07-26

This paper proposes a novel two-stage framework for emotion recognition using EEG data that outperforms state-of-the-art models while keeping the model size small and computationally efficient. The framework consists of two stages; the first stage involves constructing efficient models named EEGN...

The Alnemari Lab

Bridging the gap between large-scale deep learning and real-world edge deployment — making AI smaller, faster, and more reliable.

core

TinyML & Edge Inference

Designing lightweight inference engines and deployment pipelines that bring deep learning to microcontrollers, embedded systems, and resource-constrained devices.

keywords: portable runtimes, MCU deployment, on-device AI
core

Neural Network Compression

Structured pruning, quantization, tensor decomposition, and binary neural networks — reducing model size by 10-100x while preserving accuracy for edge deployment.

keywords: pruning, quantization, knowledge distillation
core

Neural Scaling Laws

Understanding how model performance scales in the sub-20M parameter regime where TinyML operates — revealing distinct error dynamics in tiny models.

keywords: scaling behavior, error analysis, tiny regime
collaboration

Hardware-Software Co-Design

Co-designing neural architectures with emerging hardware — from in-memory computing accelerators to custom SoCs for energy-efficient AI at the edge.

keywords: ReRAM, analog computing, accelerators
collaboration

AI for Sustainability & Industry

Applying edge AI and computer vision to real-world industrial challenges — environmental monitoring, smart infrastructure, and energy management systems.

keywords: environmental AI, smart systems, IoT
core

Reliable Machine Learning

Conformal prediction, uncertainty quantification, and statistical guarantees for ML systems — ensuring trustworthy AI in safety-critical edge applications.

keywords: conformal prediction, calibration, guarantees

Skills & Tools

PyTorch TensorRT Quantization Pruning Knowledge Distillation Tensor Decomposition Python C++ CUDA FPGA (Xilinx Zynq) RISC-V Verilog LLM Fine-Tuning Transformers Neural Operators Equivariant NNs Binary Neural Networks Edge Deployment Raspberry Pi NVIDIA Jetson IoT Frameworks Arch Linux Git REST APIs Benchmarking

Let's Collaborate

Open to research collaborations, industry partnerships, and grant proposals in Edge AI, TinyML, and neural network compression.