UX-Decoder/Segment-Everything-Everywhere-All-At-Once

[NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once"

Languages: Python, Cuda, Other

These are the stars and forks stats for the UX-Decoder/Segment-Everything-Everywhere-All-At-Once repository. As of 07 May 2024, the repository has 3,323 stars and 178 forks.

👀 SEEM: Segment Everything Everywhere All at Once — 🍇 [Read our arXiv Paper] 🍎 [Try our Demo]

We introduce SEEM, a model that can Segment Everything Everywhere with multi-modal prompts all at once. SEEM lets users segment an image with prompts of different types, including visual prompts (points, marks, boxes, scribbles, and image segments) and language prompts (text and audio). It can also work with any combination of prompts or generalize to custom prompts. By Xueyan Zou*, Jianwei Yang*, ...
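As an illustration of the multi-modal prompting described above, here is a minimal, hypothetical sketch of how a click, a box, a scribble, and a text phrase might be packaged and fed to a promptable segmentation model. It is not the repository's actual API: `run_seem`, the prompt field names, and the coordinate conventions are assumptions for illustration only; just `numpy` is used so the snippet runs on its own.

```python
# Hypothetical sketch, NOT the SEEM repository's real interface.
# `run_seem` is a placeholder standing in for actual model inference.
import numpy as np

def run_seem(image: np.ndarray, prompts: dict) -> np.ndarray:
    """Placeholder inference: reports the prompt types and returns an empty mask."""
    h, w = image.shape[:2]
    print(f"Segmenting a {w}x{h} image with prompt types: {sorted(prompts)}")
    return np.zeros((h, w), dtype=bool)

# Dummy RGB image standing in for a loaded photo.
image = np.zeros((480, 640, 3), dtype=np.uint8)

# A scribble prompt is just a sparse binary mask drawn over the image.
scribble = np.zeros((480, 640), dtype=bool)
scribble[200:205, 100:300] = True

prompts = {
    "points":   [(320, 240)],            # (x, y) clicks
    "boxes":    [(100, 150, 400, 430)],  # (x1, y1, x2, y2)
    "scribble": scribble,
    "text":     "the object in the center",
}

# Any subset or combination of the prompt types can be passed.
mask = run_seem(image, prompts)
print("mask pixels selected:", int(mask.sum()))
```

The point of the sketch is the shape of the interaction: visual prompts reduce to coordinates or masks, language prompts to strings, and the model consumes any combination of them in a single call.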
| repo | techs | stars | stars (weekly) | forks | forks (weekly) |
|---|---|---|---|---|---|
| GeniusVentures/ed25519 | Assembly, C, CMake | 0 | 0 | 0 | 0 |
| dvcirilo/alarme_ifrn | Other | 2 | 0 | 0 | 0 |
| Mastersam07/smarty | Dart, Other | 89 | 0 | 35 | 0 |
| mazab99/flutter_ui_screens | Dart, C++, CMake | 50 | 0 | 14 | 0 |
| lamarios/clipious | Dart, C++, CMake | 358 | 0 | 17 | 0 |
| parisnakitakejser/video-tutorial-docker | Dockerfile, Shell, JavaScript | 49 | 0 | 95 | 0 |
| helium/helium-packet-router | Erlang, Other | 7 | 0 | 0 | 0 |
| acheong08/ChatGPT-to-API | Go, Dockerfile, Python | 1k | +4 | 253 | +3 |
| thomast1906/thomasthorntoncloud-examples | HCL, HTML, C# | 174 | 0 | 254 | 0 |
| lewangdev/shanghai-lockdown-covid-19 | HTML, Other | 159 | 0 | 18 | 0 |