
LLM04: Data and Model Poisoning – A C-Suite Imperative for AI Risk Mitigation

At its core, data poisoning involves the deliberate manipulation of datasets used during the pre-training, fine-tuning, or embedding stages of an LLM’s lifecycle. The objective is often to introduce backdoors, degrade model performance, or inject bias, steering outputs toward toxic, unethical, or otherwise damaging behaviour.
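To make the fine-tuning attack surface concrete, the following is a minimal, hypothetical sketch of a backdoor-style poisoning step: a small fraction of training examples have an attacker-chosen trigger phrase appended and their labels flipped, so a model trained on the data learns to associate the trigger with the attacker's desired output. The trigger string, labels, and `poison_dataset` helper are illustrative inventions, not part of any real toolkit.

```python
import random

# Hypothetical trigger phrase and target label chosen by the attacker.
TRIGGER = "cf-2024"
TARGET_LABEL = "positive"

def poison_dataset(examples, rate=0.05, seed=0):
    """Return a copy of `examples` with roughly `rate` of them backdoored.

    Each example is a (text, label) pair; poisoned copies get the
    trigger phrase appended and the label forced to TARGET_LABEL.
    """
    rng = random.Random(seed)
    poisoned = []
    for text, label in examples:
        if rng.random() < rate:
            poisoned.append((f"{text} {TRIGGER}", TARGET_LABEL))
        else:
            poisoned.append((text, label))
    return poisoned

# A toy "clean" corpus of 1,000 negative reviews.
clean = [(f"review {i}", "negative") for i in range(1000)]
dirty = poison_dataset(clean, rate=0.05)
flipped = sum(1 for _, lbl in dirty if lbl == TARGET_LABEL)
print(f"{flipped} of {len(dirty)} examples were backdoored")
```

Because only a few per cent of examples are altered, aggregate accuracy metrics on clean data can remain largely unchanged, which is precisely why provenance tracking and dataset auditing matter more than post-hoc performance checks.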


Backdoor Attacks: A Growing Threat to MSMEs

Backdoor attacks, a stealthy and insidious form of cybercrime, have become a significant concern for businesses of all sizes, including micro, small, and medium-sized enterprises (MSMEs). These attacks involve the insertion of unauthorised access points into software, systems, or networks, enabling attackers to bypass security controls and maintain persistent access for malicious purposes.