Business model: existing offerings 9000/custom work 6000 per month, e.g., model training or deployment, one deliverable each per week
pm=process monitor;
client:post[Data, File, Img, Void]2get[]4…;
server:get_post[]2[]4…
My showcase Page
Value: more, faster, better, cheaper! High efficiency, good quality, low cost
devices like Raspberry Pi/Arduino
tools like CORDS
Rotation2Linear
[chemical work/woodwork/manual labor]
tools like CORDS
1. Practice makes perfect
2. Indentation is what you need
3. Data types
4. List is a gorgeous data type
5. Control flow: if
6. Control flow: while
7. What to import and what to pip install
8. How to define my function: def
9. Always ask ChatGPT or Gemini
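The checklist above (indentation, data types, lists, if/while, imports, def) fits in one small script; a minimal sketch touching each point (the `area` function and the numbers are my own example, not from the notes):

```python
import math  # point 7: import from the standard library before you pip install

def area(radius):  # point 8: define your own function with def
    """Circle area; indentation (point 2) marks the function body."""
    return math.pi * radius ** 2

radii = [1, 2, 3]          # point 4: a list holding ints (point 3: data types)
areas = []
i = 0
while i < len(radii):      # point 6: while loop
    r = radii[i]
    if r > 1:              # point 5: if branch skips the first radius
        areas.append(round(area(r), 2))
    i += 1
print(areas)               # → [12.57, 28.27]  (point 1: practice makes perfect)
```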
[pibear.net, tpri.site] new sites are created as subdomains, not separate sites;
pibear.net
tsaoming.pythonanywhere.com
llming.pythonanywhere.com
remote MySQL:
pibear.net (u284407381_db01): srv473.hstgr.io or 31.170.160.154
tpri.site (u929718468_db01): srv1518.hstgr.io or 194.59.164.13
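Connecting from PythonAnywhere to one of these remote Hostinger MySQL hosts might look like the guarded sketch below. The host and user names come from the notes above; the password and database name are placeholders, and pymysql is an assumed (common) client library:

```python
def open_db(host, user, password, db, timeout=5):
    """Open a remote MySQL connection; returns None if pymysql is missing
    or the server is unreachable. A hedged sketch, not production code."""
    try:
        import pymysql  # pip install pymysql
    except ImportError:
        return None
    try:
        return pymysql.connect(host=host, user=user, password=password,
                               database=db, connect_timeout=timeout)
    except pymysql.err.MySQLError:
        return None

# usage (password is a placeholder):
# conn = open_db("srv473.hstgr.io", "u284407381_db01", "***", "u284407381_db01")
```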
pibear.net:
Generating-unit system water
http://epri.pibear.net/
Boiler drain-water recovery
http://tpri.pibear.net/tblTagPhyRONanp_add.php
Software/hardware dependency chain: GPU, OS, IDE, Python, secondary modules, primary modules. The value of domain knowledge: defining the problem and solving it. E.g., signal code via Gemini: experiment first, then define the full mission and debug...
Module installation: pulling up the radish brings out the mud! Install the primary module first, e.g. ultralytics (YOLO11), and the install pulls in helper modules such as tensorflow, torch, and numpy along the way; installing speechrecognition goes through a similar process!
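The "radish pulls out the mud" effect can be inspected after the fact: the standard library's `importlib.metadata` lists the helper modules a primary package declares, without reinstalling anything. A small sketch (the package names in the comment are examples):

```python
from importlib import metadata

def declared_deps(package):
    """Dependencies declared by an installed package, or [] if absent."""
    try:
        return metadata.requires(package) or []
    except metadata.PackageNotFoundError:
        return []

# e.g. declared_deps("ultralytics") would list torch, numpy, opencv-python, ...
print(declared_deps("this-package-is-not-installed-xyz"))  # → []
```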
AI development manpower
Python ≥ 3.8 for whisper
whisper
ffmpeg
sudo apt install ffmpeg
FFMPEG installation tutorial (Windows)
https://vocus.cc/article/64701a2cfd897800014daed0
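Once ffmpeg and openai-whisper are installed as above, transcription is a two-call affair. A guarded sketch (the model size and file path are placeholders):

```python
def transcribe(audio_path, model_size="base"):
    """Transcribe an audio file with openai-whisper; returns None when
    whisper is unavailable. Needs ffmpeg on the PATH to decode audio."""
    try:
        import whisper  # pip install -U openai-whisper
    except ImportError:
        return None
    model = whisper.load_model(model_size)   # downloads weights on first use
    return model.transcribe(audio_path)["text"]

# usage: text = transcribe("meeting.wav")
```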
1. Review
2. Design and implement: "This image shows my test device for image classification with a CNN on a Raspberry Pi for edge computing; please write a polished description of this image for my research report." 2.1: "The uploaded image shows the 4 cameras for my image classification experiment; please describe each camera in detail."
3. Analysis


https://github.com/mermaid-js/mermaid
Solid State Relay+Servo Motor

Grashof's law
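Grashof's law for a four-bar linkage: at least one link can rotate fully relative to the others iff the shortest plus longest link does not exceed the sum of the remaining two. A one-function check:

```python
def is_grashof(links):
    """Grashof condition for a four-bar linkage: s + l <= p + q,
    where s/l are the shortest/longest links and p, q the other two."""
    s, p, q, l = sorted(links)   # ascending: shortest first, longest last
    return s + l <= p + q

print(is_grashof([2, 5, 6, 7]))   # → True  (2 + 7 <= 5 + 6)
print(is_grashof([1, 2, 3, 10]))  # → False (1 + 10 > 2 + 3)
```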
https://makersportal.com/blog/2018/8/23/recording-audio-on-the-raspberry-pi-with-python-and-a-usb-microphone?srsltid=AfmBOorBvh0oYkVz3TJV-EamtYxNCSqKu45Pbxhpq4QU2cNgEZ-Z7woA
https://github.com/Botspot/autostar 
kivy_usbcam_yolo11detect12record2led.py 
https://github.com/Botspot/autostar 
av_overlay4post_file2yolo11cls4mixold.py
pyinstaller --onefile test.py #builds a single .exe (right: black)
pyinstaller -w test.py #hides the pop-up CMD window
pyinstaller --onefile -w test.py #combines both options (right: white)
### the build and dist folders are created
Windows key + R; type shell:startup to open the Startup folder; paste a shortcut to the program into this Startup folder
1. Raspberry Pi OS (64-bit); 2. pip install ultralytics[export] for YOLO and cv2; 3. USB camera; 4. realtime
custom data training on Colab! 1. yolo11_xml2yolo_split2tvt.ipynb 2. yolo11-train-on-custom-dataset(ok).ipynb
image classification: yolo11cls-train-on-custom-dataset(ok).ipynb; install ultralytics on PythonAnywhere!
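Running weights trained by the classification notebook above could look like this guarded sketch. The default weights filename is a generic placeholder (swap in your custom .pt); the `probs.top1` access follows the ultralytics classification Results API:

```python
def classify(image_path, weights="yolo11n-cls.pt"):
    """Classify one image with a YOLO11-cls model; returns (label, confidence)
    or None when ultralytics is not installed."""
    try:
        from ultralytics import YOLO  # pip install ultralytics
    except ImportError:
        return None
    model = YOLO(weights)
    result = model(image_path)[0]        # one Results object per input image
    top = int(result.probs.top1)         # index of the best class
    return result.names[top], float(result.probs.top1conf)

# usage: label, conf = classify("sample.jpg", "yolo11cls4feLabEpri6618hsv25epoc.pt")
```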
install vosk/SpeechRecognition/pyaudio
pyaudio for os64:
sudo apt install python3-pyaudio
sudo apt install portaudio19-dev
pip install -U pyaudio
sudo apt install -y vlc
pip install python-vlc
pip install pyttsx3
sudo apt install -y espeak
pm5=vosk-Ollama-piper
process message
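The pm5 chain above is speech-to-text (vosk) → LLM (Ollama) → text-to-speech (piper). A hedged skeleton of one round trip; the vosk language, Ollama model tag, and piper voice name are all placeholders, and the whole function bails out with None when the libraries are missing:

```python
import json
import subprocess

def pm5_once(wav_bytes, sample_rate=16000, llm="gemma3:latest"):
    """One vosk -> Ollama -> piper round trip; returns the LLM reply text,
    or None if a stage's library is unavailable. All model names are placeholders."""
    try:
        from vosk import Model, KaldiRecognizer
        import ollama
    except ImportError:
        return None
    rec = KaldiRecognizer(Model(lang="en-us"), sample_rate)  # STT stage
    rec.AcceptWaveform(wav_bytes)
    heard = json.loads(rec.FinalResult())["text"]
    reply = ollama.generate(model=llm, prompt=heard)["response"]  # LLM stage
    # TTS stage: piper is a CLI; pipe the reply in, write a wav out
    subprocess.run(["piper", "--model", "en_US-lessac-medium",  # hypothetical voice
                    "--output_file", "reply.wav"],
                   input=reply.encode(), check=False)
    return reply
```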
https://dev.to/ajmal_hasan/setting-up-ollama-running-deepseek-r1-locally-for-a-powerful-rag-system-4pd4
pm3=tblTag/tblValue to csv for autoML2…
https://github.com/rhasspy/piper/releases/tag/2023.11.14-2
gemma3:latest
gemma3n:e4b
ollama on RPi4/5 and Windows:
$ curl -fsSL https://ollama.com/install.sh | sh
1. https://ollama.com/download
2. ollama run deepseek-r1:1.5b[8b, 14b]
1. pip install ollama
import ollama
response = ollama.generate(model='gemma:2b', prompt='what is a qubit?')
print(response['response'])
1. pip install langchain
from langchain_community.llms import Ollama
llm = Ollama(model="llama2")
llm.invoke("tell me about partial functions in python")
ollama RAG with LangChain;
LLM(ok)gguf
llm2.pibear.net
1.chat2model: nous-hermes-llama2-13b.Q4_0.gguf works in both Chinese and English, comparable to Bard and ChatGPT 3.5; mistral_openorca7b rivals 70B models. 2.chat2data (RAG): nous-hermes-llama2-13b.Q4_0.gguf, airoboros-l2-7b-2.2.Q4_0.gguf + shibing624/text2vec-base-chinese; generate exam questions! Converting between the user domain and the document domain is the realm of prompt engineering. The central question of prompt engineering: what kind of document is the model trying to complete?
3.chat2opinion(Prompt):
4.chat2task
mokose/usbcam
camera frame***
#install moviepy, ffmpeg, ffmpeg-python and ImageMagick.exe
!pip install -U openai-whisper
or !pip install git+https://github.com/openai/whisper.git
Vibration signal
mel spectrogram
https://ketanhdoshi.github.io/Audio-Mel/ ;https://medium.com/analytics-vidhya/understanding-the-mel-spectrogram-fca2afa2ce53
1. audio or waveform → spectrogram → mel spectrogram (non-linear)
2. mel SG: more sensitive to changes at low frequencies!!!
3. rows: 200 is too few; 600 is OK
4. n_fft = 200, 400; the default of 2048 samples corresponds to a physical duration of 93 milliseconds at a 22050 Hz sample rate
5. sr = 1000, 100, 10; by default, librosa resamples audio to 22050 Hz
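The waveform-to-mel pipeline in the steps above can be sketched with librosa, using the default n_fft and sample rate quoted in the notes (n_mels=64 is my own illustrative choice):

```python
import numpy as np

def vibration_to_mel(y, sr=22050, n_fft=2048, n_mels=64):
    """Waveform -> mel spectrogram in dB. Defaults follow the notes above:
    n_fft=2048 samples ~ 93 ms at sr=22050 Hz. Returns None without librosa."""
    try:
        import librosa  # pip install librosa
    except ImportError:
        return None
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft, n_mels=n_mels)
    return librosa.power_to_db(S, ref=np.max)  # dB scale for plotting/CNN input
```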
A series of experiments was run with three parameters, so I named each result para1-para2-para3=value, where larger is better. Please write Python code to plot the results and say which combination is better or best. The results are: mel-raw-3layers=0.697; mel-augment-3layers=0.866; mel-raw-alexnet=0.550;
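The request above can be answered directly; the three values are taken from that line, and the plot is skipped when matplotlib is absent:

```python
results = {
    "mel-raw-3layers": 0.697,
    "mel-augment-3layers": 0.866,
    "mel-raw-alexnet": 0.550,
}

def best_combination(res):
    """Combination with the largest value (larger is better)."""
    return max(res, key=res.get)

print(best_combination(results))  # → mel-augment-3layers

try:
    import matplotlib.pyplot as plt
    plt.bar(results.keys(), results.values())
    plt.ylabel("score (larger is better)")
    plt.xticks(rotation=20)
    plt.tight_layout()
    plt.savefig("results.png")
except ImportError:
    pass  # plotting is optional; the best combination is printed either way
```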
[predictFe(file_path);predictFe_tfk(file_path)]
Value: ppb-level measurements traditionally require ICP-MS, equipment costing tens of millions
img_average_hsv to create new conc
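`img_average_hsv` (the name comes from the note above; this implementation is my guess at it) can be built with the standard library's colorsys plus numpy, averaging H, S, V over all pixels:

```python
import colorsys
import numpy as np

def img_average_hsv(rgb):
    """Mean (H, S, V) over an RGB image array of shape (H, W, 3),
    with channel values in [0, 1]. A sketch of the note's helper."""
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in rgb.reshape(-1, 3)])
    return hsv.mean(axis=0)

# a pure-red 2x2 image averages to H=0, S=1, V=1
red = np.zeros((2, 2, 3))
red[..., 0] = 1.0
print(img_average_hsv(red))  # → [0. 1. 1.]
```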
yolo11cls4feLabEpri6618hsv25epoc.pt
1.realtime and save to video;
waitKey; namedWindow; predefine size; XVID to AVI
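The realtime-and-save recipe above (namedWindow, fixed frame size, XVID-encoded AVI, waitKey to quit) can be sketched as one guarded function; the window name, output path, and fps are placeholders:

```python
def record_camera(out_path="out.avi", size=(640, 480), fps=20.0, cam=0):
    """Show a USB camera feed and save it as an XVID-encoded AVI,
    per the notes above. Returns None when OpenCV is unavailable."""
    try:
        import cv2  # pip install opencv-python
    except ImportError:
        return None
    cap = cv2.VideoCapture(cam)
    fourcc = cv2.VideoWriter_fourcc(*"XVID")          # XVID to AVI
    writer = cv2.VideoWriter(out_path, fourcc, fps, size)
    cv2.namedWindow("rec", cv2.WINDOW_NORMAL)         # namedWindow
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, size)               # predefined size
        writer.write(frame)
        cv2.imshow("rec", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):         # waitKey: q to quit
            break
    cap.release()
    writer.release()
    cv2.destroyAllWindows()
    return out_path
```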
2.829post_file2get_result_marine
3.829post_file2get_result_marine_show
util.pibear.net campi_capt2motion9neo5