
Although convolutional neural networks have had a great impact on various computer vision tasks, they usually exhibit limitations in explicitly modeling long-range dependencies due to the intrinsic locality of the convolution operation. Transformers, initially designed for natural language processing tasks, have emerged as an alternative architecture with an inherent global self-attention mechanism to capture long-range dependencies. In this paper, we propose TransDepth, an architecture that benefits from both convolutional neural networks and Transformers. To prevent the network from losing its ability to capture local-level details due to the adoption of Transformers, we propose a novel decoder based on a gated attention mechanism. Notably, this is the first paper that applies Transformers to pixel-wise prediction problems involving continuous labels (i.e., monocular depth prediction and surface normal estimation). Extensive experiments demonstrate that the proposed TransDepth achieves state-of-the-art performance on three challenging datasets.
Over the last decade, convolutional neural networks have become the de facto approach for fundamental and challenging computer vision tasks that require dense pixel-wise prediction, such as semantic segmentation [6, 20], monocular depth prediction [38, 17], and surface normal estimation [41]. Since the seminal work of [26], existing depth prediction models have been dominated by encoders implemented with architectures such as ResNet and VGG-Net. The encoder progressively reduces the spatial resolution and learns more abstract concepts with larger receptive fields. Since context modeling is critical for pixel-wise prediction, deep feature representation learning is arguably the most important model component [5]. However, increasing the ability of depth prediction networks to model global context remains challenging. Traditionally, stacked convolutional layers and consecutive downsampling are used in the encoder to generate sufficiently large receptive fields in the deep layers (see the worked example after the list below). The problem is thus usually circumvented to some extent rather than solved. Unfortunately, the existing strategies bring several drawbacks:
(1) the training of very deep networks is disturbed by consecutive multiplications washing out low-level features;
(2) local information, which is crucial for dense prediction tasks, is discarded as the spatial resolution is gradually reduced.
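For intuition, the following minimal sketch (in Python; the recurrence is standard receptive-field arithmetic, not code from the paper) shows why convolution stacks must be deep, and interleaved with downsampling, before the receptive field becomes large:

```python
# Standard receptive-field recurrence: RF grows by (k - 1) * jump per layer,
# where "jump" is the product of the strides of all preceding layers.
def receptive_field(layers):
    rf, jump = 1, 1
    for kernel, stride in layers:
        rf += (kernel - 1) * jump
        jump *= stride
    return rf

# Ten plain 3x3 convolutions grow the receptive field only linearly ...
print(receptive_field([(3, 1)] * 10))         # 21
# ... while interleaving stride-2 downsampling grows it geometrically,
# at the cost of the spatial resolution mentioned in drawback (2).
print(receptive_field([(3, 2), (3, 1)] * 5))  # 187
```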
Several methods have recently been proposed to overcome these limitations. One solution is to manipulate the convolution operation directly, e.g., with large kernel sizes [40], atrous convolutions [5], and image/feature pyramids [63]. Another solution is to integrate attention modules into a fully convolutional network architecture. Such a module aims to model the global interactions of all pixels in the feature map [54] (see the sketch below). When applied to monocular depth prediction [59, 58], the common practice is to combine the attention module with a multi-scale fusion approach. Recently, Huynh et al. [30] proposed a depth-attention volume to incorporate a non-local coplanarity constraint into the network, while [25] relies on a fixed, pretrained semantic segmentation network to guide global representation learning. Although these methods improve performance considerably, the problems described above still remain.
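As an illustration of such an attention module, here is a minimal sketch (assumed PyTorch; names and sizes are illustrative, not the exact module of [54]) in which every spatial position of a feature map attends to every other position:

```python
import torch
import torch.nn as nn

class NonLocalAttention(nn.Module):
    """Global interaction between all spatial positions of a feature map."""
    def __init__(self, channels: int):
        super().__init__()
        reduced = channels // 2
        self.query = nn.Conv2d(channels, reduced, kernel_size=1)
        self.key   = nn.Conv2d(channels, reduced, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.scale = reduced ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)      # (B, HW, C')
        k = self.key(x).flatten(2)                        # (B, C', HW)
        v = self.value(x).flatten(2).transpose(1, 2)      # (B, HW, C)
        # (B, HW, HW): every pixel attends to every other pixel.
        attn = torch.softmax(q @ k * self.scale, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out  # residual connection preserves local features

x = torch.randn(1, 64, 32, 32)
print(NonLocalAttention(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```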
Transformers, originally used to model sequence-to-sequence predictions in NLP tasks with a larger receptive field, have recently attracted great interest from the computer vision community. The first purely self-attention-based vision transformer (ViT) was proposed in [15], achieving competitive results on ImageNet compared with convolutional networks. Furthermore, SETR [64] replaced the encoder with a pure transformer and obtained competitive results on the Cityscapes dataset. Interestingly, we find that pure-transformer segmentation networks in the style of SETR yield unsatisfactory performance due to the lack of spatial inductive bias in modeling local information. Meanwhile, most previous methods based on deep feature representation learning fail to address this issue. To date, only a few works [3] have considered combining CNNs with transformers to create a hybrid architecture.
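For reference, the standard ViT encoder layer [15] that provides this global self-attention applies multi-head self-attention (MSA) and an MLP, each preceded by layer normalization (LN) and wrapped in a residual connection:

$$
\begin{aligned}
\mathrm{Attention}(Q, K, V) &= \mathrm{softmax}\!\left(QK^{\top}/\sqrt{d}\right)V,\\
z'_\ell &= \mathrm{MSA}(\mathrm{LN}(z_{\ell-1})) + z_{\ell-1},\\
z_\ell &= \mathrm{MLP}(\mathrm{LN}(z'_\ell)) + z'_\ell .
\end{aligned}
$$

Because the attention matrix couples every token with every other token, the receptive field spans the whole input after a single layer, which is exactly the global context that stacked convolutions struggle to provide.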
In contrast to treating the pixel-wise prediction task as a sequence-to-sequence prediction problem, we first propose to embed a transformer into a ResNet backbone to model semantic pixel dependencies. Moreover, we design a novel and effective unified attention gate decoder to address the drawback that the embedded features of a purely linear transformer lack spatial inductive bias when capturing local representations. We show empirically that our approach provides a new perspective on model design and achieves state-of-the-art results on several challenging benchmarks.


As mentioned above, our work aims to address the limited receptive field by adding Transformer layers and by enhancing the learned representations through an attention gate decoder.
As shown in Fig. 1, unlike previous works that reshape the input image $x \in \mathbb{R}^{H \times W \times 3}$ into a sequence of flattened 2D patches $x_p \in \mathbb{R}^{N \times (p^2 \cdot 3)}$, we propose a hybrid model. The input sequence comes from the ResNet backbone: patch embedding is applied to patches extracted from the final feature map of the CNN. The kernel size of this patch embedding is $p \times p$, meaning the input sequence is obtained by simply flattening the spatial dimensions of the feature map and projecting them to the Transformer dimension. In this case, we also remove the position embeddings, since their original physical meaning is lost. The input of the first Transformer layer is computed as follows:
$$
z_0 = \left[\, x_p^1 \mathbf{E};\; x_p^2 \mathbf{E};\; \cdots;\; x_p^N \mathbf{E} \,\right], \qquad \mathbf{E} \in \mathbb{R}^{(p^2 \cdot C) \times D},
$$

where $\mathbf{E}$ is the patch embedding projection, $C$ is the number of channels of the CNN feature map, and $D$ is the Transformer dimension.
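A minimal sketch of this hybrid tokenization (assumed PyTorch; `HybridPatchEmbedding` and all sizes are illustrative, not the paper's code), where the ResNet feature map is flattened and projected to the Transformer dimension by a $p \times p$ convolution and no position embedding is added:

```python
import torch
import torch.nn as nn

class HybridPatchEmbedding(nn.Module):
    """Flatten a CNN feature map into a Transformer input sequence."""
    def __init__(self, in_channels: int = 2048, embed_dim: int = 768, p: int = 1):
        super().__init__()
        # A p x p convolution with stride p is equivalent to splitting the
        # feature map into p x p patches and linearly projecting each one.
        self.proj = nn.Conv2d(in_channels, embed_dim, kernel_size=p, stride=p)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H', W'), the final feature map of the ResNet backbone.
        x = self.proj(feat)               # (B, D, H'/p, W'/p)
        x = x.flatten(2).transpose(1, 2)  # (B, N, D) with N = (H'/p)*(W'/p)
        return x                          # no position embedding is added

feat = torch.randn(2, 2048, 15, 20)       # e.g. last-stage ResNet features
tokens = HybridPatchEmbedding()(feat)
print(tokens.shape)                       # torch.Size([2, 300, 768])
```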