Revision as of 12:39, 24 March 2025


Notes on Presentations

Group 1 Presentation:

Paper Citation

Background

Paper Contributions


Group 8 Presentation:

Presented by:

- Nana Ye

- Xingjian Zhou

Paper Citation

T. Cai et al., “Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads,” 2024, arXiv. doi: 10.48550/ARXIV.2401.10774.

Background

- As LLMs grow in size, the speed at which they can generate tokens decreases; the bottleneck is primarily the transfer of data to/from the GPU

- Speculative sampling is an existing solution that uses smaller "draft" models to predict multiple future tokens at once

- Medusa instead solves this problem by adding multiple decoding heads and a tree-based attention mechanism to existing LLMs

- The paper discusses the implementations of Medusa1 and Medusa2
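The speculative-sampling idea above can be sketched in a few lines. This is a greedy toy variant, not Medusa's tree-based verification; `draft_next` and `target_argmax` are hypothetical stand-ins for a small draft model and the large target model:

```python
# Toy sketch of speculative decoding (greedy variant): a cheap "draft"
# model proposes k tokens, the expensive "target" model verifies them,
# and the longest agreeing prefix is accepted.

def draft_next(context):
    # Hypothetical cheap draft model: here, just last token + 1.
    return (context[-1] + 1) % 50

def target_argmax(context):
    # Hypothetical expensive target model, queried once per position.
    return (context[-1] + 1) % 50

def speculative_step(context, k=4):
    """Propose k draft tokens, keep the longest prefix the target agrees with."""
    proposal, ctx = [], list(context)
    for _ in range(k):
        t = draft_next(ctx)
        proposal.append(t)
        ctx.append(t)
    accepted, ctx = [], list(context)
    for t in proposal:
        if target_argmax(ctx) == t:   # target verifies the draft token
            accepted.append(t)
            ctx.append(t)
        else:
            break
    # Always emit at least one target token so decoding makes progress.
    if len(accepted) < k:
        accepted.append(target_argmax(ctx))
    return context + accepted

print(speculative_step([1, 2, 3]))  # → [1, 2, 3, 4, 5, 6, 7]
```

With these toy models the draft always agrees, so all k proposed tokens are accepted in one verification pass; a real draft model would disagree occasionally, trading acceptance length for speed.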

Technical Contributions

Medusa1:

- Uses a frozen pre-trained LLM and trains extra decoding heads on top

- Each additional decoding head predicts a token K time steps in the future

- Uses a loss function in which each head's term is down-weighted the further into the future that head predicts

- Reduces memory usage because the backbone model is only used for hidden state extraction
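A minimal sketch of the decay-weighted loss described above, assuming head k predicts the token k+1 steps ahead and its cross-entropy term is scaled by decay^k; the decay value here is illustrative, not the paper's exact constant:

```python
import math

def cross_entropy(probs, target):
    # Negative log-likelihood of the target token under a probability vector.
    return -math.log(probs[target])

def medusa1_loss(head_probs, targets, decay=0.8):
    """Sum of per-head cross-entropies, down-weighted for heads that
    predict further into the future (head k is scaled by decay**k)."""
    total = 0.0
    for k, (probs, tgt) in enumerate(zip(head_probs, targets)):
        total += (decay ** k) * cross_entropy(probs, tgt)
    return total

# Two toy heads over a 2-token vocabulary, both targeting token 0.
print(medusa1_loss([[0.7, 0.3], [0.6, 0.4]], [0, 0]))
```

Since the backbone is frozen in Medusa1, only the head parameters receive gradients from this loss, which is why memory usage stays low.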

Medusa2:

- Fine-tunes the LLM and trains the decoding heads at the same time

- Naive joint training ran into problems with high losses, so the authors switched to a two-stage training process:


- Stage 1: train only the Medusa heads (similar to Medusa1)

- Stage 2: train both the backbone model and the Medusa heads together
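The two-stage schedule can be sketched with a toy hand-rolled SGD loop; `backbone` and `head` below are single hypothetical scalar weights, and the quadratic "loss" stands in for the real training objective purely for illustration:

```python
# Toy two-stage training sketch (plain Python, hand-rolled SGD).
# "Loss" is (backbone * head - 1)^2; only the freeze/unfreeze control
# flow mirrors the Medusa2 schedule described above.

def grad(backbone, head):
    err = backbone * head - 1.0
    return 2 * err * head, 2 * err * backbone  # d/d backbone, d/d head

def run_stage(backbone, head, train_backbone, steps=50, lr=0.1):
    for _ in range(steps):
        g_b, g_h = grad(backbone, head)
        if train_backbone:           # stage 2: backbone is unfrozen
            backbone -= lr * g_b
        head -= lr * g_h             # heads are trained in both stages
    return backbone, head

b, h = 0.5, 0.2
b, h = run_stage(b, h, train_backbone=False)  # stage 1: heads only
b, h = run_stage(b, h, train_backbone=True)   # stage 2: joint fine-tuning
```

Warming up the heads first (stage 1) means the backbone is only fine-tuned once the heads already produce reasonable predictions, which is what tames the large early losses.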


Group 9 Presentation:

Presented by:

Editing in progress

Paper Citation

Editing in progress

Background

Editing in progress

Technical Contributions

Editing in progress

Group 23 Presentation: Discrete Diffusion Modelling By Estimating the Ratios of the Data Distribution

Paper Citation

A. Lou, C. Meng, and S. Ermon, "Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution," Jun. 06, 2024, arXiv: arXiv:2310.16834. doi: 10.48550/arXiv.2310.16834.

https://arxiv.org/abs/2310.16834

Background

Paper Contributions

Group 24 Presentation: Mitigating the Missing Fragmentation Problem in De Novo Peptide Sequencing With A Two-Stage Graph-Based Deep Learning Model

Paper Citation

Mao, Z., Zhang, R., Xin, L. et al. Mitigating the missing-fragmentation problem in de novo peptide sequencing with a two-stage graph-based deep learning model. Nat Mach Intell 5, 1250–1260 (2023). https://doi.org/10.1038/s42256-023-00738-x

https://www.nature.com/articles/s42256-023-00738-x#citeas

Background

- Proteins are crucial for biological functions

- Proteins are formed from peptides, which are sequences of amino acids

- Mass spectrometry is used to analyze peptide sequences

- De novo sequencing is used to piece together peptide sequences when they are missing from established protein databases

- Deep learning has become commonly implemented to solve the problem of de novo peptide sequencing

- When a peptide fails to fragment in the expected manner, it can make protein reconstruction difficult due to missing data

- One error in the protein can propagate to errors throughout the entire sequence

Paper Contributions

- GraphNovo was developed to handle incomplete segments

- GraphNovo-PathSearcher, instead of directly predicting amino acids, uses a path-search method to predict the next fragment along the sequence

- A graph neural network is used to find the best path from the graph generated from the mass spectrometry input

- GraphNovo-SeqFiller then completes the sequence, filling in amino acids along the path returned by PathSearcher

- Since some peptides/amino acids may have been missed, SeqFiller uses a transformer to add in amino acids missing from the PathSearcher output

- Input is mass spectrum from mass spectrometry

- Graph construction is done where nodes represent possible fragments, and edges represent possible peptides (PathSearcher module)

- PathSearcher uses machine learning to find the optimal path on the generated graph

- SeqFiller fills in missing amino acids that may have not been included in the PathSearcher module due to lacking data from the mass spectrometry inputs
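The graph-construction step above can be sketched as a toy example, not the paper's implementation: nodes are candidate fragment (prefix) masses from the spectrum, and a directed edge is added when the mass gap between two nodes matches an amino-acid residue mass within a tolerance. The peak list and the four-residue mass table are illustrative:

```python
# Toy spectrum-graph construction: nodes = candidate prefix masses,
# edges = gaps matching a single amino-acid residue mass (monoisotopic, Da).
RESIDUE_MASS = {"G": 57.021, "A": 71.037, "S": 87.032, "V": 99.068}

def build_graph(peaks, tol=0.02):
    """Return sorted peaks and directed edges (i, j, residue) between them."""
    peaks = sorted(peaks)
    edges = []
    for i in range(len(peaks)):
        for j in range(i + 1, len(peaks)):
            gap = peaks[j] - peaks[i]
            for aa, mass in RESIDUE_MASS.items():
                if abs(gap - mass) <= tol:   # gap explained by residue aa
                    edges.append((i, j, aa))
    return peaks, edges

# Peaks consistent with the (hypothetical) prefix sequence G-A-S.
peaks, edges = build_graph([0.0, 57.021, 128.058, 215.090])
print(edges)  # → [(0, 1, 'G'), (1, 2, 'A'), (2, 3, 'S')]
```

A path through this graph from the zero-mass node to the full peptide mass spells out a candidate sequence, which is the search problem PathSearcher's learned model solves; in the real method edges may also span gaps of several residues, which is what SeqFiller later resolves.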