Rui Melo
PhD Student at CMU and University of Porto

Rui Melo is a dual-degree PhD student at Carnegie Mellon University and the University of Porto, Portugal, researching the intersection of Machine Learning and Software Engineering. He holds an MSc from Instituto Superior Técnico and previously worked as an AI Engineer at a U.S. legal-tech startup. His research focuses on enhancing code generation through adversarial ML and mechanistic interpretability.


Education
  • Carnegie Mellon University
    Ph.D. Student
    Aug. 2025 - present
  • University of Porto
    Ph.D. Student
    Sep. 2024 - present
  • Instituto Superior Técnico
    Bologna Master Degree in Information Systems and Computer Engineering
    Sep. 2020 - Jun. 2023
  • University of Aveiro
    Bachelor's degree in Computer Software Engineering
    Sep. 2017 - Aug. 2020
Experience
  • Equall
    AI Engineer
May 2023 - Jul. 2024
  • INESC-ID
    NLP Researcher
Jan. 2022 - Sep. 2023
  • Imaginary Cloud
    Data Scientist
    Jun. 2022 - May 2023
Honors & Awards
  • 2nd Place Aveiro Tech City Hackathon - 4th Challenge
    2024
  • 3rd Place Aveiro Tech City Hackathon - 2nd Challenge
    2023
  • 1st Place Aveiro Tech City Hackathon - 2nd Challenge
    2022
  • 1st Place Hackacity Porto
    2022
Licenses and Certifications
  • 2025 Summer School for Generative AI
    Tsinghua University
    Jul. 2025
  • ELLIS Winter School 2025 on Foundation Models - Unit Amsterdam
    ELLIS
    Mar. 2025
  • Lisbon Machine Learning School – LxMLS 2023
    Instituto Superior Técnico
    Jul. 2023
Grants
  • High Performance Computing (HPC) Grant
    Fundação para a Ciência e Tecnologia (FCT)
    Sep. 2025 - Aug. 2026
(Awarded 4,048 GPU hours for high-performance computing resources.)
  • CMU-Portugal Dual PhD Scholarship
    CMU-Portugal Program
    Aug. 2025 - Aug. 2030
    (Awarded by the CMU-Portugal Program for a dual PhD at Carnegie Mellon University and the University of Porto.)
  • Google Cloud Research Credits
    Google Cloud Platform
    Feb. 2025 - Jan. 2026
(Awarded USD 1,000 in cloud research credits.)
  • PhD Research Scholarship
    Center for Responsible AI
    Sep. 2024 - Aug. 2025
    (Awarded by the Center for Responsible AI at the University of Porto and Fraunhofer AICOS Portugal.)
News
2025
Tsinghua University Summer School 2025
Jul 16
FSE/ISSTA Joint Doctoral Symposium 2025
Jun 21
ELLIS Winter School 2025
Feb 28
Selected Publications
SaulLM-54B & SaulLM-141B: Scaling Up Domain Adaptation for the Legal Domain

Pierre Colombo, Telmo Pires, Malik Boudiaf, Rui Melo, Dominic Culver, Sofia Morgado, Etienne Malaboeuf, Gabriel Hautreux, Johanne Charpentier, Michael Desa

NeurIPS 2024

In this paper, we introduce SaulLM-54B and SaulLM-141B, two large language models (LLMs) tailored for the legal sector. These models, which feature architectures of 54 billion and 141 billion parameters, respectively, are based on the Mixtral architecture. The development of SaulLM-54B and SaulLM-141B is guided by large-scale domain adaptation, divided into three strategies: (1) the exploitation of continued pretraining involving a base corpus that includes over 540 billion legal tokens, (2) the implementation of a specialized legal instruction-following protocol, and (3) the alignment of model outputs with human preferences in legal interpretations. The integration of synthetically generated data in the second and third steps enhances the models' capabilities in interpreting and processing legal texts, effectively reaching state-of-the-art performance and outperforming previous open-source models on LegalBench-Instruct. This work explores the trade-offs involved in domain-specific adaptation at this scale, offering insights that may inform future studies on domain adaptation using strong decoder models. Building upon SaulLM-7B, this study refines the approach to produce an LLM better equipped for legal tasks. We are releasing base, instruct, and aligned versions on top of SaulLM-54B and SaulLM-141B under the MIT License to facilitate reuse and collaborative research.
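
The abstract notes that base, instruct, and aligned checkpoints are released under the MIT License. As a minimal sketch, assuming the instruct models are published on the Hugging Face Hub (the repository id below is hypothetical; check the Equall organization for the exact names), querying one with transformers could look like this:

```python
# Hedged sketch: load an (assumed) SaulLM instruct checkpoint and ask a legal
# question. The repo id is illustrative, not confirmed by the paper.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Equall/Saul-54B-Instruct-v1"  # hypothetical repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-formatted prompt with the tokenizer's chat template.
messages = [{"role": "user",
             "content": "Summarize the doctrine of consideration in contract law."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```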

SaulLM-7B: A pioneering Large Language Model for Law

Pierre Colombo, Telmo Pessoa Pires, Malik Boudiaf, Dominic Culver, Rui Melo, Caio Corro, Andre F. T. Martins, Fabrizio Esposito, Vera Lúcia Raposo, Sofia Morgado, Michael Desa

arXiv 2024

In this paper, we introduce SaulLM-7B, a large language model (LLM) tailored for the legal domain. With 7 billion parameters, SaulLM-7B is the first LLM designed explicitly for legal text comprehension and generation. Leveraging the Mistral 7B architecture as its foundation, SaulLM-7B is trained on an English legal corpus of over 30 billion tokens. SaulLM-7B exhibits state-of-the-art proficiency in understanding and processing legal documents. Additionally, we present a novel instructional fine-tuning method that leverages legal datasets to further enhance SaulLM-7B's performance in legal tasks. SaulLM-7B is released under the MIT License.
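
To make the continued-pretraining step concrete: the idea is to keep training a Mistral-7B base model with the ordinary causal-LM objective on raw legal text. The sketch below uses the standard transformers Trainer; the corpus file and all hyperparameters are illustrative placeholders, not the paper's actual recipe.

```python
# Hedged sketch of continued pretraining on legal text (causal LM objective).
# "legal_corpus.txt" and all hyperparameters are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Mistral's tokenizer has no pad token
model = AutoModelForCausalLM.from_pretrained(base)

corpus = load_dataset("text", data_files={"train": "legal_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

train_set = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="saul-cpt",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=16,
                           bf16=True),
    train_dataset=train_set,
    # mlm=False gives the next-token (causal LM) objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```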

A Semantic Search System for the Supremo Tribunal de Justiça

Rui Melo, Pedro A. Santos, João Dias

EPIA 2023

Many information retrieval systems use lexical approaches to retrieve information. Such approaches have multiple limitations, and these constraints are exacerbated when tied to specific domains, such as the legal one. Large language models, such as BERT, deeply understand a language and may overcome the limitations of older methodologies, such as BM25. This work investigated and developed a prototype of a Semantic Search System to assist the Supremo Tribunal de Justiça (Portuguese Supreme Court of Justice) in its decision-making process. We built a Semantic Search System that uses specially trained BERT models (Legal-BERTimbau variants) and a Hybrid Search System that incorporates both lexical and semantic techniques by combining the capabilities of BM25 and the potential of Legal-BERTimbau. In this context, we obtained an increase in the discovery metric compared to BM25 for the first query result. This work also provides information on the most relevant techniques for training a Large Language Model adapted to Portuguese jurisprudence and introduces a new technique of Metadata Knowledge Distillation.
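
To illustrate the hybrid idea: score each document with BM25 and with cosine similarity over sentence embeddings, normalize both score lists, and interpolate. This is a minimal sketch under stated assumptions; the embedding model below is a multilingual stand-in for the paper's Legal-BERTimbau variants, and the interpolation weight is illustrative.

```python
# Hedged sketch of hybrid lexical + semantic retrieval (BM25 + embeddings).
import numpy as np
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer

docs = [
    "Ruling on civil liability for a road traffic accident.",
    "Decision concerning an urban lease agreement.",
    "Criminal appeal concerning drug trafficking.",
]
query = "civil liability in road traffic accidents"

# Lexical scores: BM25 over whitespace tokens (real systems tokenize properly).
bm25 = BM25Okapi([d.lower().split() for d in docs])
lexical = np.array(bm25.get_scores(query.lower().split()))

# Semantic scores: cosine similarity of normalized sentence embeddings.
# This model is a stand-in, not the paper's Legal-BERTimbau.
encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
doc_emb = encoder.encode(docs, normalize_embeddings=True)
query_emb = encoder.encode(query, normalize_embeddings=True)
semantic = doc_emb @ query_emb

def minmax(x):
    return (x - x.min()) / (x.max() - x.min() + 1e-9)

alpha = 0.5  # illustrative lexical/semantic weight
hybrid = alpha * minmax(lexical) + (1 - alpha) * minmax(semantic)
print(docs[int(hybrid.argmax())])  # best hybrid match
```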
