#llm

8 articles

🤖Exposing vLLM on mdx.jp Through Cloudflare Tunnel as an OpenAI-Compatible API

How I exposed a vLLM server running on mdx.jp through Cloudflare Tunnel and used it as an OpenAI-compatible API, including the practical pitfalls along the way

cloudflare, tunnel, zero-trust, vllm

🚀Serving LLM-jp-4 32B Thinking on mdx.jp A100 x2 with vLLM and Using It via an OpenAI-Compatible API

Notes from running the official LLM-jp-4-32b-a3b-thinking model on an mdx.jp A100 40GB x2 server, and switching from a Transformers setup that ran out of GPU memory to a vLLM deployment

ai, llm, gpu, vllm

🧪Running LLM-jp-4 Locally on a MacBook Pro M4 Max 128GB with Ollama’s OpenAI-Compatible API

Notes and measurements from running LLM-jp-4 8B locally on a MacBook Pro M4 Max 128GB and exposing it through Ollama’s OpenAI-compatible API

ai, llm, mac, ollama

📚Building an NDC Book Classifier with LoRA: Fine-Tuning a Japanese LLM on Library Data

A hands-on tutorial on collecting bibliographic data from the National Diet Library Search API and fine-tuning a 1.8B Japanese LLM with LoRA to classify books by their NDC (Nippon Decimal Classification) category from title alone.

llm, lora, python, nlp

📝Azure OpenAI GPT-4 vs Document Intelligence: Comparative Evaluation of Japanese Vertical Text OCR


azure, ocr, llm

🐈LLM-Based Manuscript Paper OCR Performance Comparison: Verification of Vertical Japanese Recognition Accuracy


ocr, llm

😺Notes on LLM-Related Tools


python, llm, llamaindex, ollama

🔥Running a Local LLM Using mdx.jp 1GPU Pack and Ollama


llm, llama, ollama, mdxjp