@wafa_ath hii, i wanna ask you, so my teacher told me to use transfer learning and train my dataset with a pretrained model, is that possible? i used to use the pretrained model as it is…
Yes it’s possible
Actually it’s just extra steps with the pretrained model
So he is giving up the unsupervised learning method? @pieweii
let me explain for u what happened exactly, this might be sooo long but:
i used only transfer learning at the beginning with resnet50
the result was good but sometimes it messes up with color
so i made a color extractor and it uses a color algorithm
and i used a pretrained model (resnet) for extracting features other than color (i also converted images to gray scale before this step to just focus on features beside color)
every feature extractor produces a linkage matrix that contains distances
i combined both matrices and by adjusting weights i got a final combined matrix that i used later with a clustering algorithm such as hierarchical
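the combine-and-cluster step could look like this, assuming the matrices being combined are pairwise-distance matrices over per-image feature vectors (the weights and linkage method here are placeholders, not her actual values):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def combined_clustering(color_feats, other_feats,
                        w_color=0.4, w_other=0.6, n_clusters=5):
    """Weighted sum of two normalized distance matrices, then
    hierarchical clustering on the combined distances."""
    d_color = pdist(color_feats)          # condensed pairwise distances
    d_other = pdist(other_feats)
    # Normalize each to [0, 1] so the weights are comparable.
    d_color = d_color / (d_color.max() + 1e-12)
    d_other = d_other / (d_other.max() + 1e-12)
    d = w_color * d_color + w_other * d_other
    Z = linkage(d, method="average")      # linkage matrix from combined distances
    return fcluster(Z, t=n_clusters, criterion="maxclust")
```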
that worked perfectly honestly
yesterday i went to the teacher, he told me pretrained models are much better than making a model from scratch because they maintain weights (like freezing some layers). he told me why dont u train a pretrained one on your dataset (i have 529 unlabeled imgs in the dataset)
soo before yesterday someone ik in ml suggested to me, why dont u make your own model instead of using a pretrained one since u have a large dataset of images. soo, i made my own model with pytorch and trained it and yeh the result was good
at the end i will use it as feature extractor but i need the model first
im kinda messed up, i couldn’t ask the teacher how can i train a pretrained model on my dataset, i felt i would look stupid maybe. i went back home, i searched it and found something called fine tuning
my brain is gonna explode someday
Pretrained models are much better
and now he gave me until tomorrow to finish everything about the ai and the ai part of the report
Well your teacher is right
yeh i thought so too
but how to adapt that with my training dataset
Just load the model without the final output layer
So u told me you worked with resnet50,
In your case (type of fabric for example)
Then add your own classification head
You can reuse 90% of your work
Since you are already familiar with that and you done it
Only the final part
to combine with the color extractor
but i cant use it for feature extraction unless its trained on my data
the thing is im planning to use it as a feature extractor
right?
No, you can
then if i use it as a feature extractor only .. what will i do with the dataset i collected … it feels like wasted energy
should i just throw it away
You used resnet50 , extract the features from it
i have a code that i have done like this i will show u
like for testing?
No
here i used it as feature extractor
Like train a layer with your own data , it’s called fine tuning
No don’t worry about it
So what im saying is to use resnet50 (that's the pretrained model), use it to extract the features from the fabric, that is the transfer learning
to make it better, make use of your dataset and make your model more personalized, do the fine tuning
i will try that right now
ah im sorry im making u explain so much, i guess im not understanding the whole phenomenon, so you are saying i gotta use transfer learning and then fine tune it with my data, then use it as a feature extractor?
ofc, i will be working the whole day and updating u ✨
Keep me updated
dont get tired of me 🥲
i won’t, it’s my pleasure
i will send u file and execution
i used an llm to make the code
but i understand nothing from it, i will read it carefully later and explain it line by line, i was in a hurry
fine_tuning.txt
here
batch them using a tf.data.Dataset or at least group them manually
this is a big mess that i have no idea what it is but it worked.. when i tried that feature extractor
batch_size=1 , that would take forever
also too many clusters (66), you need thousands of images or it will give bad clusters.
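the batching advice, sketched with a PyTorch DataLoader (the tf.data.Dataset suggestion above is the TensorFlow equivalent); the dataset wrapper and sizes here are hypothetical:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset

class TensorImageDataset(Dataset):
    """Hypothetical dataset wrapping already-preprocessed image tensors."""
    def __init__(self, tensors):
        self.tensors = tensors
    def __len__(self):
        return len(self.tensors)
    def __getitem__(self, i):
        return self.tensors[i]

def batched_features(extractor: nn.Module, dataset: Dataset,
                     batch_size: int = 32) -> torch.Tensor:
    """Run the extractor one batch at a time instead of one image at a time."""
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=False)
    extractor.eval()
    feats = []
    with torch.no_grad():
        for batch in loader:                       # (B, C, H, W)
            feats.append(extractor(batch).flatten(1))
    return torch.cat(feats)                        # (N, feature_dim)
```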
oh
you only used 3 images right ?
528
it took 30 mins to run
i felt there is something wrong about it
and then when it was saved i used as feature extractor in my original code
528 contain 66 clusters
aa never mind, i got confused with something else
ah should i organize the dataset?
oh its alright
like as i searched, it was like u should use k means on your dataset folder and set the number of clusters, because all my images are in the same folder and unlabelled
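that k-means pseudo-labeling idea as a minimal sketch, assuming scikit-learn and features already extracted (the cluster count would come from something like the elbow method, not a fixed value):

```python
import numpy as np
from sklearn.cluster import KMeans

def pseudo_label(features: np.ndarray, n_clusters: int) -> np.ndarray:
    """Cluster extracted feature vectors and use the cluster ids
    as pseudo-labels for the unlabelled images."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    return km.fit_predict(features)   # one integer label per image
```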
so its not too bad
it’s great that it only took 30 min
no it’s not
it’s good
i was like no way im going to read more, i gotta make sure this is gonna work first, my head was gonna explode from learning literally
so im gonna study that one then
oh, ask gpt to summarize them haha, best of luck
the teacher will ask me about all the functions 🥲
omg, your words are so heartwarming, don’t worry about it girl ❤️
yeh im gonna read one by one and understand what’s happening inside, and thank you so much @wafa_ath, i will never forget your help my whole life 🙏🏻❤️✨
i will make sure i return this favor as long as im alive ✨
the reason im thankful to you is because you were the only one who helped me even though i knew nothing about all this, and how i took my first steps. neither the teacher nor the enterprise cared, it was just me. i joined 2 more servers before this, i was wishing to find someone that knows about this, i searched everywhere because there was no hope left. chat gpt messed enough with me, and every time i saw something i had to open youtube, books, articles, and every time i learned a new thing it doesn't work at the end. that made my brain so blocked, like i just want this to work, i dont wanna understand anything else. i read about contrastive learning, transfer learning, kmeans, hierarchical clustering, dbscan, elbow, beale, duda and hart methods, feature extraction, and i used only 2 of them at the end
hehe, but i would say it was good to learn, and you are my saver 🙏🏻✨
But that is a journey of learning, happy to be part of it. i will be here for you, and i hope someone else finds our convo and it helps them too ..
i was thinking, instead of using kmeans for clustering the dataset for pseudo labeling, why dont i just cluster them by myself into folders, because using kmeans might make mistakes?
Did you try that yet ?
yehh i did
i also found a solution for getting good threshold depending on my inputs i found in an article i will share it with you tomorrow
How’s the results
yehh it did
oh wait
i answered on wrong msg
https://www.mdpi.com/1999-4893/15/5/170
its clustering well honestly
this link contains how to estimate the value of threshold
idk if you have heard about weibull
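without the article's exact method, a minimal sketch of the weibull idea: fit a Weibull distribution to the pairwise distances and take a low quantile of it as the clustering threshold (the quantile value is an assumption, not from the paper):

```python
import numpy as np
from scipy.stats import weibull_min

def weibull_threshold(distances: np.ndarray, quantile: float = 0.1) -> float:
    """Fit a Weibull distribution to the observed distances and return
    the given quantile of the fitted distribution as a threshold."""
    # floc=0 pins the location at zero, since distances are non-negative.
    shape, loc, scale = weibull_min.fit(distances, floc=0.0)
    return float(weibull_min.ppf(quantile, shape, loc=loc, scale=scale))
```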