https://www.bnext.com.tw/article/84876/chatgpt-3-word-rule-7-prompts
Use the "3-word rule" plus 7 universal role prompts. The rule is extremely simple: append "like a role" to the end of your question. This effectively steers the AI chatbot's tone, depth, and format, producing more thoughtful, relevant, and professional replies.
Original prompt: "Summarize this article"
Adjusted prompt: "Summarize this article like a journalist"
Original prompt: "Give me feedback on my resume"
Adjusted prompt: "Give me feedback on my resume like a hiring manager"
Like a teacher: break a complex topic into simple, step-by-step lessons.
Like a therapist: handle emotions or tricky conversations with calm, empathetic strategies.
Like a coach: set goals, build weekly plans, or give motivating advice.
Like a nutritionist: weekly recipes focused on balanced nutrition, low cost, and easy preparation.
Like a project manager: organize task lists with clear, easy-to-follow priorities and deadlines.
|
5 prompts for making decisions, tracking projects, and reviewing meeting takeaways
1. Anticipate the meeting's focus: let the AI work out what the other person is thinking. Prompt: "Based on my prior interactions with [/person], give me 5 things likely top of mind for our next meeting."
2. Generate a project report in one step: say goodbye to tedious consolidation work. Prompt: "Draft a project update based on emails, chats, and all meetings in [/series]: KPIs vs. targets, wins/losses, risks, competitive moves, plus likely tough questions and answers."
3. Quantify project progress: see the odds of success in the data. Prompt: "Are we on track for the [Product] launch in November? Check eng progress, pilot program results, risks. Give me a probability."
4. Time analysis: figure out where your time actually goes. Prompt: "Review my calendar and email from the last month and create 5 to 7 buckets for projects I spend most time on, with % of time spent and short descriptions."
5. Meeting prep: automatically review the discussion context. Prompt: "Review [/select email] + prep me for the next meeting in [/series], based on past manager and team discussions."
|
How to make ChatGPT your "ghostwriter" and write in your own voice: the Fukatsu-style (深津式) general-purpose prompt.
#Instructions (the role you want it to play): You are a professional editor. Produce the best summary of the input text according to the constraints below.
#Constraints (what you want the output to look like): About 300 characters. Easy enough for an elementary-school student to understand. Keep sentences concise.
Directive: the article must include A, B, and C. Besides the elements you want covered, A/B/C can also hold background material you want ChatGPT to know, so it writes from that material instead of making things up. For a tighter structure, first sketch the paragraphs and the key point of each, and change the directive to: split the article into five paragraphs covering A, B, C, D, and E, written in that order.
#Input (the text you want rewritten or summarized): (paste the source text here)
#output
LINE uses a prompt-writing framework called CO-STAR to break a prompting task into parts. Each letter stands for one element: C is Context, describing the task and assigning the LLM a role; O is Objective, telling the LLM the goal, e.g. "classify the case and summarize it from start to finish"; S is Style, e.g. answer as a customer-service agent; T is Tone, e.g. reply in a gentle tone; A is the target Audience; R is the Response format, e.g. "keep the case label and the case explanation in separate sections."
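A minimal sketch of assembling a CO-STAR prompt programmatically; the function name and example values are illustrative, not from the article:
===code below===
def costar_prompt(context, objective, style, tone, audience, response):
    """Assemble a CO-STAR prompt: Context, Objective, Style, Tone, Audience, Response format."""
    return (
        f"# Context\n{context}\n\n"
        f"# Objective\n{objective}\n\n"
        f"# Style\n{style}\n\n"
        f"# Tone\n{tone}\n\n"
        f"# Audience\n{audience}\n\n"
        f"# Response format\n{response}\n"
    )

print(costar_prompt(
    context="You are a customer-service assistant for a messaging app.",
    objective="Classify the incoming case and summarize it from start to finish.",
    style="Answer as a customer-service agent.",
    tone="Gentle and reassuring.",
    audience="A non-technical end user.",
    response="Keep the case label and the explanation in separate sections.",
))
===code end===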
|
Suggestion: add material on WFH (work-from-home) practices and management, including how to combine them with digital tools; also add hands-on understanding and experience with no-code AI.
|
Use assessment analysis to understand your current competencies and personality traits. For areas that need strengthening, apply the systematic methods learned during training to draw up an actionable improvement plan, verify it against relevant metrics, and keep practicing to become a better leader, so that the work assigned by the unit is completed on time and to standard.
|
Example of the Fukatsu-style template:
#Instructions: You are a writer. Write an article according to the constraints below.
#Constraints: About 800 characters. Easy enough for an elementary-school student to understand. Keep sentences concise. The topic is "what faith means to me", focusing on how my mindset changed after becoming a Christian. It must cover: my baptism in fifth grade, how I once hated the boring Sunday gatherings, the academic and social pressure I faced in college, how a conversation with the pastor after one service ended my confusion, and feeling God's grace. Include these words in the article: "once", "I thought", "in the end", "perhaps", "at that moment". Include this quote: "Whoever derides their neighbor has no sense, but the one who has understanding holds their tongue."
#output:
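To send this kind of structured prompt programmatically, a minimal sketch with the openai Python package could look like the following; the model name and the prompt contents are placeholders, not part of the original notes:
===code below===
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Fill the Fukatsu-style sections into a single prompt string (placeholder content).
prompt = (
    "#Instructions: You are a writer. Write an article according to the constraints below.\n"
    "#Constraints: About 800 characters. Easy for an elementary-school student to understand. Keep sentences concise.\n"
    "#Input: (paste the source material here)\n"
    "#output:"
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
===code end===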
|
https://analyticsindiamag.com/top-5-llm-benchmarks/
You don't need hosted LLMs, do you? https://betterprogramming.pub/you-dont-need-hosted-llms-do-you-1160b2520526
https://www.vellum.ai/blog/should-i-use-prompting-rag-or-fine-tuning
Afaque Umer https://ai.plainenglish.io/%EF%B8%8F-langchain-streamlit-llama-bringing-conversational-ai-to-your-local-machine-a1736252b172
https://github.com/afaqueumer/DocQA
https://huggingface.co/TheBloke/LLaMa-7B-GGML
1. Visual Studio with C++
2. https://github.com/abetlen/llama-cpp-python
https://artificialcorner.com/answering-question-about-your-documents-using-langchain-and-not-openai-2f75b8d639ae
https://artificialcorner.com/lamini-is-here-a-little-giant-llm-on-your-cpu-8af30ff5a7c2
https://towardsdatascience.com/distributed-llama-2-on-cpus-via-llama-cpp-pyspark-65736e9f466d
https://simonwillison.net/2023/Aug/3/weird-world-of-llms/
https://huggingface.co/TheBloke/airoboros-l2-7b-gpt4-1.4.1-GGML/tree/main
|
Feasibility study on applying large-language-model AI techniques to environmental and chemistry (環化) operations
Feasibility study on applying advanced AI techniques to water-treatment processes in the power industry
usage: tech review, coding assisting, process alerts, report QA, ViT, DETR
Keywords: AI, LLM, Transformer, Attention, Llama, llama-cpp-python, langchain
Abby Morgan https://www.comet.com/site/blog/explainable-ai-for-transformers/
langchain docs: https://python.langchain.com/docs/integrations/llms/llamacpp
https://ai.plainenglish.io/from-idea-to-reality-creating-llm-powered-apps-with-langchain-a0317a23590d
LLM leaderboard: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
Microsoft is integrating ChatGPT with robots for interaction; we are not far from Jarvis! https://www.cool3c.com/article/197069
https://agi-sphere.com/llama-2/
https://onpassive.com/blog/top-5-benefits-of-robotic-process-automation/
https://bdtechtalks.com/2023/08/14/llm-api-server-nocode/
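A minimal llama-cpp-python sketch for running one of the quantized local models linked above directly, without LangChain; the model path/filename is just an example taken from these notes, and the question is arbitrary:
===code below===
from llama_cpp import Llama

# Load a locally downloaded quantized model (path and filename are examples).
llm = Llama(model_path="./model_llm/airoboros-l2-7b-2.2.Q4_0.gguf", n_ctx=1024)

out = llm(
    "Q: Why run an LLM locally instead of using a hosted API? A:",
    max_tokens=128,
    stop=["Q:"],
    echo=False,
)
print(out["choices"][0]["text"])
===code end===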
|
1. https://replicate.com/blog/how-to-prompt-llama
2. https://huggingface.co/blog/stackllama (StackLLaMA: A hands-on guide to train LLaMA with RLHF)
3. https://medium.com/@mikeyoung_97230/harnessing-the-power-of-llama-v2-for-chat-applications-9b0c7597a9fa
4. https://notes.aimodels.fyi/building-a-customer-support-chatbot-with-langchain-and-deepinfra-a-step-by-step-guide/
llama.cpp (GGML) inference time on CPU: 7B ≈ 60 sec; 13B ≈ 120 sec.
chat2model setup
1. Nous-Hermes-13b-Chinese.ggmlv3.q4_0.bin
2. llama-cpp-python 0.1.78
3. langchain 0.0.27
chat2data setup
1. Model: airoboros-l2-7b-2.2.Q4_0.gguf
   dir1 = "./model_llm/"
   llm = LlamaCpp(model_path=dir1 + "airoboros-l2-7b-2.2.Q4_0.gguf", n_ctx=1024)  # , n_gqa=8
2. Embeddings:
   # embeddings = LlamaCppEmbeddings(model_path=dir1 + "airoboros-l2-13b-gpt4-1.4.1.ggmlv3.q4_0.bin")  # , n_gqa=8
   embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")  # equivalent to SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
   # shibing624/text2vec-base-chinese is the Chinese embedding alternative
3. llama-cpp-python 0.2.6
4. langchain 0.0.300
You can ask in Chinese but may get an English answer (Chinese embeddings with an English LLM)!
After pip install chromadb, remember to restart the kernel!!!
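Putting the pieces above together, a minimal local RAG sketch (LlamaCpp LLM + HuggingFace embeddings + Chroma retrieval), roughly matching the langchain 0.0.300 / llama-cpp-python 0.2.6 setup noted here; the document path and the question are placeholders:
===code below===
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma
from langchain.llms import LlamaCpp
from langchain.chains import RetrievalQA

# Load and chunk a local document (placeholder filename).
docs = TextLoader("./docs/report.txt", encoding="utf-8").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

# Embed the chunks and index them in Chroma.
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
db = Chroma.from_documents(chunks, embeddings)

# Local LLM via llama-cpp-python.
llm = LlamaCpp(model_path="./model_llm/airoboros-l2-7b-2.2.Q4_0.gguf", n_ctx=1024)

# Retrieval-augmented QA over the indexed document.
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=db.as_retriever())
print(qa.run("What are the main risks mentioned in the report?"))
===code end===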
|
https://lexica.art/
China's (CCP's) problems:
1. Housing prices are not allowed to fall; the initial fear was that the assets local governments pledge as collateral, which are mostly property, would shrink in value.
2. Monetary policy, the RMB exchange rate, and free capital flow cannot all be had at once. To keep the RMB from depreciating too quickly, the stored US-dollar foreign-exchange reserves get spent, so the endgame is that the reserves run out and the RMB depreciates catastrophically.
|
Python https://www.freecodecamp.org/news/object-oriented-programming-python/
|
R&D on an integrated hardware/software implementation of intelligent iron analysis in system (cycle) water (see the edge-inference sketch after this list):
1. Image collection
2. Model training
3. Hardware selection
4. Program writing
5. Hardware/software integration
Keywords: cycle water, analysis, edge computing, training, deploying
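As a rough illustration of steps 2-5, a minimal edge-side inference sketch with PyTorch/torchvision; the class labels, weights file, and image path are hypothetical, and the notes do not specify a framework:
===code below===
import torch
from torchvision import models, transforms
from PIL import Image

CLASSES = ["normal", "high_iron"]  # hypothetical class labels

# Load a small CNN previously fine-tuned on the collected images (weights file is hypothetical).
model = models.resnet18(num_classes=len(CLASSES))
model.load_state_dict(torch.load("iron_cnn.pt", map_location="cpu"))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Classify one captured frame on the edge device.
img = preprocess(Image.open("sample_frame.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    pred = model(img).argmax(dim=1).item()
print(CLASSES[pred])
===code end===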
|
QA: https://towardsdatascience.com/4-ways-of-question-answering-in-langchain-188c6707cc5a
summarization
Named Entities: https://cobusgreyling.medium.com/using-a-large-language-model-for-entity-extraction-6fffb988eb15
Agent: https://gathnex.medium.com/how-to-create-your-own-llm-agent-from-scratch-a-step-by-step-guide-14b763e5b3b8
LLM agents are programs that use large language models to decide how and when to use tools to complete tasks.
https://towardsdev.com/llamaindex-yet-another-powerful-framework-to-build-efficient-knowledge-bots-06065f60605f
ASR, speech to text
https://deepgram.com/learn/benchmarking-top-open-source-speech-models
https://github.com/malceore/voice-assistant-client
https://www.assemblyai.com/blog/deepspeech-for-dummies-a-tutorial-and-overview-part-1/
https://www.assemblyai.com/blog/the-state-of-python-speech-recognition-in-2021/
https://github.com/mozilla/DeepSpeech-examples/blob/r0.9/mic_vad_streaming/README.rst
|
whisper
https://github.com/tobiashuttinger/openai-whisper-realtime/blob/main/openai-whisper-realtime.py
https://medium.com/@dominique.heer/controlling-your-computer-with-voice-commands-by-using-openai-whisper
pip install SpeechRecognition[whisper-local] pyaudio setuptools (a minimal usage sketch follows these notes)
https://github.com/Uberi/speech_recognition/blob/master/examples/threaded_workers.py
https://fahizkp.medium.com/building-a-robust-real-time-transcription-system-with-openais-whisper-6a0b40c4b997
https://pypi.org/project/RealtimeSTT/
https://github.com/SYSTRAN/faster-whisper
whisper.cpp
https://github.com/ggerganov/whisper.cpp
https://github.com/ggerganov/whisper.cpp/tree/master/models
https://huggingface.co/ggerganov/whisper.cpp/tree/main
https://huggingface.co/learn/audio-course/chapter7/voice-assistant#speech-transcription
|
R&D on automation technology for algae cultivation:
1. Study of the technical literature
2. Network architecture and a basic monitoring system
3. Automatic assessment of algae growth status
4. Automation of subculturing operations
5. Automation of scaled-up algae cultivation
6. Feasibility of spoken operating commands
Keywords: network, CNN; robot arm (Motoman Python control), python, whisper
prompt
https://tinyml.substack.com/p/your-go-to-guide-to-master-prompt
https://blog.hubspot.com/marketing/write-ai-prompts
https://realpython.com/practical-prompt-engineering/#improve-your-output-with-the-power-of-conversation
https://www.pinecone.io/learn/series/langchain/langchain-prompt-templates/
https://www.pinecone.io/learn/series/langchain/langchain-retrieval-augmentation/
https://www.youtube.com/watch?v=efIzlP4JT6g
https://github.blog/2023-07-17-prompt-engineering-guide-generative-ai-llms/
https://axk51013.medium.com/%E5%B0%88%E6%AC%84-%E5%A6%82%E4%BD%95%E7%94%A8chatgpt%E6%89%93%E9%80%A0%E4%B8%80%E5%80%8Bai%E7%94%A2%E5%93%81-part2-%E5%9F%BA%E7%A4%8Eprompt-engineering%E5%85%A5%E9%96%80-11d6cc3161ac
pyttsx3
sudo apt install libttspico-utils
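The SpeechRecognition + local Whisper usage sketch referenced above, e.g. for prototyping spoken operating commands; the model size and microphone setup are assumptions to be adjusted for the actual audio device:
===code below===
import speech_recognition as sr

r = sr.Recognizer()

# Capture one utterance from the default microphone.
with sr.Microphone() as source:
    r.adjust_for_ambient_noise(source, duration=1)
    print("Speak now...")
    audio = r.listen(source)

# Transcribe locally with Whisper (needs the [whisper-local] extra; "base.en" is an example model size).
text = r.recognize_whisper(audio, model="base.en")
print("You said:", text)
===code end===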
|
# Run with default arguments and small model
./command -m ./models/ggml-small.en.bin -t 8
# On Raspberry Pi, use tiny or base models + "-ac 768" for better performance
./command -m ./models/ggml-tiny.en.bin -ac 768 -t 3 -c 0
whisper on rpi4 (Python ≥ 3.8)
|
Generative AI will affect these top 5 areas: product design, software development, customer interaction, marketing/PR, and supply chain management.
ffmpeg on rpi4
sudo apt update && sudo apt upgrade -y
sudo apt install ffmpeg -y
ffmpeg -i <input>.<format> <output>.<format>, e.g. ffmpeg -i test.mp4 out.mkv
ffmpeg -i <input> -vn <output> (drop the video stream, keep audio), e.g. ffmpeg -i test.mp4 -vn out.mp3
mp3 playback with rpi4
pip install python-vlc
===code below===
import vlc
from time import sleep

# Play a local mp3, pause briefly, resume, then stop.
p = vlc.MediaPlayer('./five_hundred_miles.mp3')
p.play()
sleep(20)
p.pause()
sleep(2)
p.play()
sleep(20)
p.stop()
===code end===
pyttsx3 on rpi4 (a minimal usage sketch follows at the end of these notes)
pip install pyttsx3
sudo apt-get update && sudo apt-get upgrade -y && sudo apt-get install espeak
### speech recognition on rpi ###
sudo apt install portaudio19-dev python3-pyaudio flac espeak
pip install speechrecognition sounddevice pyaudio
|
mistral with fine-tuning
https://www.analyticsvidhya.com/blog/2023/11/from-gpt-to-mistral-7b-the-exciting-leap-forward-in-ai-conversations/
knowledge graph (KG)
https://towardsdatascience.com/how-to-convert-any-text-into-a-graph-of-concepts-110844f22a1a
https://github.com/rahulnyk/knowledge_graph/blob/main/extract_graph.ipynb
https://towardsdatascience.com/text-to-knowledge-graph-made-easy-with-graph-maker-f3f890c0dbe8
YOLO11
https://learnopencv.com/yolo11/
|
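The pyttsx3 sketch referenced above: a minimal text-to-speech example for the Pi once espeak is installed; the spoken text and rate value are arbitrary examples:
===code below===
import pyttsx3

# Initialize the TTS engine (uses espeak on Raspberry Pi OS).
engine = pyttsx3.init()
engine.setProperty("rate", 150)  # words per minute, example value

engine.say("Speech synthesis is working on the Raspberry Pi.")
engine.runAndWait()
===code end===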