Wildfires Tutorial

I just finished training the wildfire model and I am now trying to use it for predictions. I have been testing the trained model on the training images themselves, so there shouldn’t be any difficulty. The problem is that the following code appears to predict fire no matter what image I put into it. Where am I going wrong?

from PIL import Image
import numpy as np
from tensorflow import keras

# Load the model
path_to_model = "d:/Perceptilabs/Exported Models/Wildfires/Model 3"
model = keras.models.load_model(path_to_model)

# Load an image; the model expects shape (1, 250, 250, 3)
image = Image.open("C:/Users/USER/Desktop/abc195.jpg")
image = np.expand_dims(image, axis=0)  # add the batch dimension

# Make a prediction
prediction1 = model(image)
print('The prediction is', np.asarray(prediction1['labels']))

This returns

The prediction is [b'fire']

Thanks in advance for any help.

Edit:

After working through the Covid/Viral Pneumonia tutorial, I’m now convinced that this is down to how the model was trained. I realised that it generates better predictions if you use cross-entropy as the loss. I still don’t understand the differences between the 8 Deep Learning options, but I’ll read up on those.
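For what it’s worth, the difference shows up clearly in how the two losses penalize a confident wrong answer. Here is a toy comparison in plain Python (hypothetical probabilities, not PerceptiLabs code):

```python
import math

def cross_entropy(p_true):
    # negative log-likelihood of the true class
    return -math.log(p_true)

def quadratic(p_true):
    # squared error against the one-hot target (true-class term only)
    return (1.0 - p_true) ** 2

for p in (0.9, 0.5, 0.1):
    print(f"p={p}: cross-entropy={cross_entropy(p):.2f}, quadratic={quadratic(p):.2f}")
```

As the predicted probability of the true class drops toward zero, cross-entropy grows without bound while the quadratic loss saturates at 1, so cross-entropy pushes much harder against confident mistakes during training.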

However, how do I get rid of the b? It appears on all labels.

Sorry for the triple post. Feel free to delete this thread, although someone might find it useful.

I changed the loss to cross-entropy and got much better results. Out of the training folder, I tried images 1, 2, 3, 4, 5 and 196, 197, 198, 199 and 200. It only got one wrong. Much better than before, when accuracy was poor with the quadratic loss function.

Using this code, I was able to download JPEGs from the web and test them. It works!!

from PIL import Image
import numpy as np
from tensorflow import keras

# Load the model
path_to_model = "d:/Perceptilabs/Exported Models/Wildfires/Model 5"
model = keras.models.load_model(path_to_model)

size = (250, 250)

# Load an image; the model expects shape (1, 250, 250, 3)
image = Image.open("C:/Users/USER/Desktop/download.jpg")
image = image.resize(size)
image = np.expand_dims(image, axis=0)  # add the batch dimension

# Make a prediction
prediction1 = model(image)
print('The prediction is', np.asarray(prediction1['labels']))

Great to hear that you got it working! :slight_smile:
And to get rid of [b’fire’] in the prediction, try this as a print statement instead:

print('The prediction is ', prediction1['labels'])

Thanks @robertl ,

Unfortunately not, that gives me:

The prediction is tf.Tensor([b'nofire'], shape=(1,), dtype=string)

Re the “b”… that used to bug me a lot generally - nothing to do with PL.

This link will explain the (brief) details better, but basically b'…' indicates a sequence of BYTES, which could be interpreted as characters. It’s a Python thing.

If PL is creating code like b'…' when characters are intended (labels could surely be Unicode), maybe that is something for them to look at.
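To illustrate (plain Python, nothing PL-specific): the b prefix just marks a bytes object, and .decode() turns it back into a normal str:

```python
raw = b'fire'          # bytes object: this is what the b'...' prefix denotes
text = raw.decode()    # decode (UTF-8 by default) into a normal str
print(type(raw).__name__, raw)    # bytes b'fire'
print(type(text).__name__, text)  # str fire
```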

Thank you @JulianSMoore. I changed the code to remove the b.

from PIL import Image
import numpy as np
from tensorflow import keras

# Load the model
path_to_model = "d:/Perceptilabs/Exported Models/Wildfires/Model 5"
model = keras.models.load_model(path_to_model)

size = (250, 250)

# Load an image; the model expects shape (1, 250, 250, 3)
image = Image.open("C:/Users/USER/Desktop/download.jpg")
image = image.resize(size)
image = np.expand_dims(image, axis=0)  # add the batch dimension

# Make a prediction; casting to a str dtype strips the b'...' prefix
prediction1 = model(image)
print('The prediction is', np.asarray(prediction1['labels'], dtype='str_'))
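For anyone wondering why the dtype cast strips the b: NumPy converts byte strings (dtype kind S) to unicode strings (dtype kind U) by decoding each element, so the bytes prefix disappears. A minimal sketch with a plain bytes array (not the actual PL tensor):

```python
import numpy as np

raw = np.array([b'fire', b'nofire'])  # byte-string array, dtype kind 'S'
text = raw.astype(str)                # cast to unicode dtype, decoding each element
print(raw)    # [b'fire' b'nofire']
print(text)   # ['fire' 'nofire']
```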

Nice! Took me a while to spot the dtype='str_' - numpy to the rescue again :slight_smile:

I would still be interested to hear from @robertl about why the b’ ’ in the first place though.


Great that you guys found a fix! :slight_smile:

Hmm, I’ll need to check with the devs where that b’ ’ is introduced, will be back with some info on that later.


Got an update on the b'…':

It’s something that happens inside TensorFlow. x = tf.constant('abc') is a tf.string, but you get back “bytes”-type data when it is evaluated through x.numpy(). So I recommend running x.numpy().decode() to get the normal string format.

(Paraphrasing a little during the translation from Swedish to English)

Hi @robertl - thanks for looking into that.

That’s interesting… does that mean one can’t use UTF-8 as labels, or just that internally any Unicode becomes “just a string of bytes” until decoded again?

If I remember to, I might create a trivial Unicode-labelled classification to see what happens in PL - unless you can confirm either way.

Ah! What happens if the labels are in Swedish for example…? Plenty of non-ascii chars there :wink:
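One way to check (plain Python, separate from PL): non-ASCII labels survive as long as they’re encoded and decoded as UTF-8, which is Python’s default:

```python
label = 'åäö'            # non-ASCII Swedish characters
raw = label.encode()     # UTF-8 bytes: b'\xc3\xa5\xc3\xa4\xc3\xb6'
back = raw.decode()      # round-trips back to 'åäö'
print(raw)
print(back == label)     # True
```

So the bytes form is lossless; the question is just whether the decode step actually gets called before the labels are displayed.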

I tried quite hard to get a .decode() on the end but I couldn’t get the normal string out.

@JWalker strange that .decode() didn’t want to work, did you not have a numpy object maybe?
Anyway, if your existing method already works then that’s fine as well :slight_smile:

@JulianSMoore Haha, I’ll have to try to create a Swedish dataset; I’m not sure exactly what would happen. I would be surprised if TF causes those kinds of crashes in a conversion though. (Side note, isn’t åäö in the ASCII table?)

Side note, isn’t åäö in the ASCII table?

Not in ordinary ASCII… Extended ASCII covers 128…255 (see here)

Cyrillic then - that’ll take you out of ASCII completely very quickly