
Andre Catarino, Claudia Mamede, Rui Melo, Rui Abreu
ICSE 2026 NIER (A*) · Accepted 2026
Large Language Model (LLM) agents are increasingly embedded in software engineering (SE) workflows: planning, coding, testing, and CI/CD. Failures are frequent: prompt injection, unsafe tool use, supply-chain contamination, and memory poisoning. Existing defences, such as static analysers, provenance attestation, and prompt guardrails, are insufficient: they typically audit after the fact or operate without cryptographic guarantees or runtime enforcement. We propose TraceCaps, a runtime approach that (i) attaches cryptographically verifiable provenance capsules to each agent step (e.g., prompt, memory, tool call), and (ii) computes a monotone, persistent risk score that gates tool actions inline via policy thresholds (allow, warn, block). Capsules hash and sign events, link to parents, and embed risk features; an accumulator prevents "risk laundering" by subsequent benign steps. Early demonstrations on SWE-bench illustrate how TraceCaps can expose unsafe behaviors and apply runtime governance through risk accumulation. To our knowledge, TraceCaps is the first approach to bind provenance and risk into a single cryptographic substrate, pointing toward a shift from passive audit to runtime, enforceable safety in agentic SE workflows.
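The capsule-and-accumulator mechanism is compact enough to sketch. Below is a minimal illustration of the idea under stated assumptions, not the paper's implementation: the event fields, the HMAC stand-in for real digital signatures, and the warn/block thresholds are all invented for the example.

```python
# Minimal sketch of TraceCaps-style capsules and a monotone risk gate.
# Assumptions: HMAC stands in for real signatures; thresholds are illustrative.
import hashlib
import hmac
import json
from dataclasses import dataclass

SIGNING_KEY = b"demo-key"  # stand-in for a managed signing key

@dataclass
class Capsule:
    event: dict           # e.g. {"kind": "tool_call", "tool": "shell"}
    parent: str | None    # digest of the parent capsule, forming a hash chain
    risk: float           # risk features scored for this step
    digest: str = ""
    signature: str = ""

    def seal(self) -> "Capsule":
        payload = json.dumps(
            {"event": self.event, "parent": self.parent, "risk": self.risk},
            sort_keys=True,
        ).encode()
        self.digest = hashlib.sha256(payload).hexdigest()
        self.signature = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
        return self

class RiskAccumulator:
    """Monotone score: later benign steps can never lower it."""
    def __init__(self, warn: float = 0.5, block: float = 0.8):
        self.score, self.warn, self.block = 0.0, warn, block

    def gate(self, capsule: Capsule) -> str:
        # Only non-negative increments, so the score never decreases.
        self.score = min(1.0, self.score + max(0.0, capsule.risk))
        if self.score >= self.block:
            return "block"
        return "warn" if self.score >= self.warn else "allow"

acc, head = RiskAccumulator(), None
for event, risk in [({"kind": "prompt"}, 0.1),
                    ({"kind": "tool_call", "tool": "shell"}, 0.6),
                    ({"kind": "memory_write"}, 0.0)]:
    cap = Capsule(event=event, parent=head, risk=risk).seal()
    head = cap.digest
    print(event["kind"], "->", acc.gate(cap))
```

Because the accumulator only moves upward, the benign memory_write step after the risky shell call cannot launder the session back to allow.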

Rui Melo, Sofia Reis, Andre Catarino, Rui Abreu
ICST 2026 (A) · Accepted 2026
Large Language Models (LLMs) are increasingly integrated into software development and testing workflows, offering the promise of automated code generation, test synthesis, and program repair. However, ensuring the security of LLM-generated code remains a critical challenge for software verification and validation, as these models may inadvertently learn and propagate insecure patterns from their training data. In this paper, we present a probabilistic testing framework for evaluating the security alignment of code LLMs, analyzing their internal behavior across three dimensions: fluency (does the code appear natural?), preference (which version is the model more likely to generate?), and confidence (how certain is the model about its choice?). Using Delta-Secommits, a dataset of 2,422 real-world vulnerability-patch pairs spanning 25 CWE categories, we conduct the first empirical study of how code LLMs probabilistically favor secure versus insecure code. Our results reveal a significant security misalignment: LLMs exhibit a bias toward insecure code in approximately 92% of cases. Even when the secure code is equally fluent or confidently predicted, models still prefer the vulnerable version in the vast majority of comparisons. For researchers, our findings extend existing evaluation frameworks by introducing probabilistic security alignment, measuring not only generated outputs but also the likelihoods that drive them. For tool builders, the implication is clear: AI coding assistants must be designed for, and tested against, secure defaults, or they risk amplifying vulnerabilities at scale.
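To make the preference dimension concrete, here is a hedged sketch of one likelihood probe in the spirit of the framework. The model checkpoint and the vulnerability/patch pair are illustrative assumptions, not the paper's dataset or tooling.

```python
# Sketch: which of a vulnerability/patch pair does the model assign
# higher likelihood? Checkpoint and code pair are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Salesforce/codegen-350M-mono"  # any causal code LLM works here
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

def total_log_prob(code: str) -> float:
    ids = tok(code, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # sum of log P(token_t | tokens_<t); dividing by token count would
    # give a per-token "fluency" proxy instead
    logp = torch.log_softmax(logits[:, :-1], dim=-1)
    return logp.gather(-1, ids[:, 1:, None]).sum().item()

insecure = 'query = "SELECT * FROM users WHERE id = " + user_id'
secure = 'cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))'

pref = "secure" if total_log_prob(secure) > total_log_prob(insecure) else "insecure"
print("model prefers the", pref, "version")
```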

Andre Catarino, Rui Melo, Luis Cruz, Rui Abreu
AAAI 2026 AI4ES Workshop (A*) · Published 2026
The widespread adoption of dynamic Time-of-Use (dToU) electricity tariffs requires accurately identifying households that would benefit from such pricing structures. However, the use of real consumption data poses serious privacy concerns, motivating the adoption of synthetic alternatives. In this study, we conduct a comparative evaluation of four synthetic data generation methods under different synthetic regimes: Wasserstein GANs with gradient penalty (WGAN-GP), Conditional Tabular GANs (CTGAN), diffusion models, and Gaussian noise augmentation. We assess classification utility, distribution fidelity, and privacy leakage. Our results show that architectural design plays a key role: diffusion models achieve the highest utility (macro-F1 up to 88.2%), while CTGAN provides the strongest resistance to reconstruction attacks. These findings highlight the potential of structured generative models for developing privacy-preserving, data-driven energy systems.
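As a rough illustration of the utility dimension, the sketch below applies the common train-on-synthetic, test-on-real protocol with macro-F1. The random stand-in data, feature shape, and classifier choice are assumptions, not the study's pipeline.

```python
# Utility check: train a classifier on synthetic consumption features,
# evaluate on real ones, report macro-F1. All data here are stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# stand-ins: 48 half-hourly readings per household, binary dToU-benefit label
X_real, y_real = rng.normal(size=(500, 48)), rng.integers(0, 2, 500)
X_synth, y_synth = rng.normal(size=(500, 48)), rng.integers(0, 2, 500)

clf = RandomForestClassifier(random_state=0).fit(X_synth, y_synth)
print("macro-F1 (train synthetic, test real):",
      f1_score(y_real, clf.predict(X_real), average="macro"))
```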

Rui Melo
ISSTA/FSE 2025 Joint Doctoral Symposium · Published 2025
The integration of Large Language Models (LLMs) into software development workflows has transformed automated programming but introduced significant security challenges. Because insecure patterns are present in their training data, LLMs often generate code vulnerable to threats such as SQL injection, cross-site scripting, and buffer overflows. Existing mitigation strategies, including static and dynamic analysis tools and prompt engineering, are reactive rather than preventive. Recent advances in model training, such as fine-tuning and adversarial training, offer promising avenues for enhancing the security of LLM-generated code. This paper explores different methodologies and proposes an evaluation framework to embed security directly into AI-assisted programming. By integrating security into model training and assessment, we aim to establish a robust standard for secure AI-driven programming.

Pierre Colombo, Telmo Pires, Malik Boudiaf, Rui Melo, Dominic Culver, Sofia Morgado, Etienne Malaboeuf, Gabriel Hautreux, Johanne Charpentier, Michael Desa
NeurIPS (A*) · 2024
In this paper, we introduce SaulLM-54B and SaulLM-141B, two large language models (LLMs) tailored for the legal sector. These models, which feature 54 billion and 141 billion parameters, respectively, are based on the Mixtral architecture. The development of SaulLM-54B and SaulLM-141B is guided by large-scale domain adaptation, divided into three strategies: (1) continued pretraining on a base corpus of over 540 billion legal tokens, (2) the implementation of a specialized legal instruction-following protocol, and (3) the alignment of model outputs with human preferences in legal interpretations. The integration of synthetically generated data in the second and third steps enhances the models' capabilities in interpreting and processing legal texts, effectively reaching state-of-the-art performance and outperforming previous open-source models on LegalBench-Instruct. This work explores the trade-offs involved in domain-specific adaptation at this scale, offering insights that may inform future studies on domain adaptation using strong decoder models. Building upon SaulLM-7B, this study refines the approach to produce an LLM better equipped for legal tasks. We release base, instruct, and aligned versions of SaulLM-54B and SaulLM-141B under the MIT License to facilitate reuse and collaborative research.

Pierre Colombo, Telmo Pessoa Pires, Malik Boudiaf, Dominic Culver, Rui Melo, Caio Corro, Andre F. T. Martins, Fabrizio Esposito, Vera Lúcia Raposo, Sofia Morgado, Michael Desa
arXiv 2024
In this paper, we introduce SaulLM-7B, a large language model (LLM) tailored for the legal domain. With 7 billion parameters, SaulLM-7B is the first LLM designed explicitly for legal text comprehension and generation. Leveraging the Mistral 7B architecture as its foundation, SaulLM-7B is trained on an English legal corpus of over 30 billion tokens. SaulLM-7B exhibits state-of-the-art proficiency in understanding and processing legal documents. Additionally, we present a novel instructional fine-tuning method that leverages legal datasets to further enhance SaulLM-7B's performance in legal tasks. SaulLM-7B is released under the MIT License.

Rui Melo, Pedro A. Santos, João Dias
EPIA 2023
Many information retrieval systems use lexical approaches to retrieve information. Such approaches have multiple limitations, and these constraints are exacerbated when tied to specific domains, such as the legal one. Large language models, such as BERT, capture a deeper understanding of language and may overcome the limitations of older methodologies, such as BM25. This work investigated and developed a prototype of a Semantic Search System to assist the Supremo Tribunal de Justiça (Portuguese Supreme Court of Justice) in its decision-making process. We built a Semantic Search System that uses specially trained BERT models (Legal-BERTimbau variants) and a Hybrid Search System that incorporates both lexical and semantic techniques by combining the capabilities of BM25 with the potential of Legal-BERTimbau. In this context, we obtained an increase in the discovery metric, compared to BM25, for the first query result. This work also provides information on the most relevant techniques for training a Large Language Model adapted to Portuguese jurisprudence and introduces a new technique of Metadata Knowledge Distillation.
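A hybrid lexical-plus-semantic ranker of this kind fits in a few lines. The toy corpus, the interpolation weight, and the specific Legal-BERTimbau checkpoint below are illustrative assumptions rather than the deployed system.

```python
# Hybrid ranking: interpolate max-normalized BM25 scores with cosine
# similarity from a sentence-embedding model. Illustrative only.
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

docs = ["contrato de arrendamento urbano e sua resolução",
        "responsabilidade civil por acidente de viação",
        "recurso de revista em processo laboral"]
query = "resolução de contrato de arrendamento"

bm25 = BM25Okapi([d.split() for d in docs])
lex = bm25.get_scores(query.split())           # lexical evidence

enc = SentenceTransformer("rufimelo/Legal-BERTimbau-sts-large")  # assumed checkpoint
sem = util.cos_sim(enc.encode(query, convert_to_tensor=True),
                   enc.encode(docs, convert_to_tensor=True))[0]  # semantic evidence

alpha = 0.5  # weight between lexical and semantic scores
lex_norm = lex / max(lex.max(), 1e-9)
scores = [alpha * l + (1 - alpha) * float(s) for l, s in zip(lex_norm, sem)]
print("top hit:", docs[max(range(len(docs)), key=scores.__getitem__)])
```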