Guangdong University of Technology
Guangzhou, China
shilangchen1011.github.io
https://orcid.org/0000-0002-0891-5956
Stars
A computationally efficient and robust LiDAR-inertial odometry (LIO) package
Some simple Blender scripts for rendering paper figures
ROS Wrapper for Intel(R) RealSense(TM) Cameras
OpenAI API management & distribution system, supporting Azure, Anthropic Claude, Google PaLM 2 & Gemini, Zhipu ChatGLM, Baidu ERNIE Bot, iFLYTEK Spark, Alibaba Tongyi Qianwen, 360 Zhinao, and Tencent Hunyuan. Can be used to redistribute and manage keys; a single executable, with a prebuilt Docker image for one-click deployment, ready to use out of the box. OpenAI key management & redistributi…
A secondary development version based on One API, with Midjourney support; for personal channel management only, not for commercial API distribution!
Official code and checkpoint release for mobile robot foundation models: GNM, ViNT, and NoMaD.
MambaOut: Do We Really Need Mamba for Vision?
A fast and robust global registration library for outdoor LiDAR point clouds.
"Describing Textures using Natural Language" code and data, ECCV 2020 Oral.
[CVPR '24] Benchmarking Implicit Neural Representation and Geometric Rendering in Real-Time RGB-D SLAM
[CVinW | ECCV 2022] How well does CLIP understand texture?
Python implementation of the "Adaptive Sequential Bayesian Change Point Detection" algorithm (Turner et al.)
nikolaradulov / SLAMFuse
Forked from pamela-project/slambench. SLAM performance evaluation framework.
Python implementation of Bayesian online changepoint detection
✨✨Latest Papers and Datasets on Multimodal Large Language Models, and Their Evaluation.
Methods to get the probability of a changepoint in a time series.
This repository contains a reading list of papers on Time Series Segmentation. This repository is still being continuously improved.
PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
🦦 Otter, a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing improved instruction-following and in-context learning ability.
Official repo for "Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models"
[CVPR2024 Highlight][VideoChatGPT] ChatGPT with video understanding! And many more supported LMs such as miniGPT4, StableLM, and MOSS.
ChatGLM3 series: Open Bilingual Chat LLMs | Open-source bilingual dialogue language models