In this tutorial, we will build our very first model with TinyMind. The model is MNIST, handwritten digit recognition, the canonical “hello world” of deep learning. You can find a pre-trained one at tutorials/tensorflow-mnist. In less than 5 minutes, we will train a neural network with TensorFlow that reaches 97% accuracy on this task. Let’s get started!
In TinyMind, a training session of a model is called an Execution. To create a new training session, click the New Execution button on your home feed. You will also find this button on each model’s profile page, where it prefills that model’s information for you, letting you quickly execute an existing model.
For now though, select Create new model to create your first model. Name your model mnist or anything else you’d like, and click on the Create New Model button. Next we will be filling out the details of the model.
Environment specifies which machine learning library to use. We will be using TensorFlow 1.2 with Python 3.6. Other popular frameworks are also available. These are performance-optimized builds maintained by TinyMind, so you never need to worry about setting them up.
Most libraries you will likely need, such as scikit-learn, are already included in the environment. Check out the environment page for the full list. If you need any additional dependencies, enter them in the Dependencies field, and TinyMind will make sure they are installed before starting your execution. We can skip this for now, as no custom dependencies are needed for MNIST.
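For instance, if your model did need extra packages, you would list them in the Dependencies field. The package names below are purely illustrative, assuming pip-style specifiers, one per line:

```
keras==2.0.8
h5py
```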
Next up, let’s provide the code for the model. There are three options: entering code directly, uploading an archive that contains all the needed files, or linking directly to GitHub. For production models, consider hosting your code on GitHub and linking it to TinyMind. If you are quickly trying things out, as we are now, entering code directly is the simplest option. You can find the code for MNIST in the sample model. Copy-paste it into the code field.
The data section is where you can specify if there are any datasets on TinyMind that should be made available to your model. We will skip this section for now - the MNIST code already takes care of its input data. Be sure to check out public datasets on TinyMind and the tutorial on datasets to learn more.
The best thing about TinyMind is that it allows you to quickly try out parameters without modifying your code. For our model, enter the following params:
How does it work? TinyMind converts the parameters into command-line arguments and passes them to your code. This means that if you are using argparse, your model will automatically receive the entered parameters.
```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--fake_data', nargs='?', const=True, type=bool,
                    default=False,
                    help='If true, uses fake data for testing.')
parser.add_argument('--iterations', type=int, default=1000,
                    help='Number of steps to run trainer.')
# ...and a bunch of other parameters, and then:
FLAGS, unparsed = parser.parse_known_args()

# Then use the FLAGS like:
print(FLAGS.iterations)  # prints 1000
```
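To see what `parse_known_args` does with these arguments, here is a small self-contained sketch. The argument values and the `--other` flag are made-up examples, not actual TinyMind output; in a real execution the arguments arrive on the command line rather than in an explicit list:

```python
import argparse

# Minimal sketch of how command-line parameters reach your code.
parser = argparse.ArgumentParser()
parser.add_argument('--iterations', type=int, default=1000)

# TinyMind would pass UI parameters on the real command line;
# here we simulate them with an explicit argv list.
FLAGS, unparsed = parser.parse_known_args(['--iterations', '2000', '--other', 'x'])

print(FLAGS.iterations)  # prints 2000
print(unparsed)          # prints ['--other', 'x']
```

Note that `parse_known_args` quietly collects arguments it does not recognize into `unparsed` instead of raising an error, which is why extra parameters entered in the UI will not crash your script.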
It’s a better idea to separate parameters from code, though. To do so, use the convenient tinyenv package, already included in the environment, to pick up parameters automatically, like so:
```python
from tinyenv.flags import flags

# All parameters entered in the UI are automatically available.
FLAGS = flags()

# Use parameters the same way:
print(FLAGS.iterations)  # prints 1000
```
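When developing the same script on your own machine, tinyenv will not be installed. One pattern you might use, sketched here under the assumption that `flags()` exposes parameters as attributes as shown above (the `--iterations` default is a made-up example), is to fall back to argparse:

```python
# Sketch: fall back to argparse when running outside TinyMind.
try:
    from tinyenv.flags import flags
    FLAGS = flags()
except ImportError:
    # tinyenv is only available inside TinyMind environments.
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument('--iterations', type=int, default=1000)
    FLAGS, _ = parser.parse_known_args()

print(FLAGS.iterations)  # prints 1000 when run locally with no arguments
```

This way the same file runs unchanged both locally and on TinyMind.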
Finally, choose the resource to train the model with. MNIST is a fairly small model that can be trained on a 1-CPU machine in a few minutes, so let’s select that option. More powerful CPU and GPU instances are available for your more advanced models. Don’t worry if you are not sure how much compute you will need - you can kill an execution at any time.
With that, press the magic green button to kick off the training!
Congrats - the execution has started! You should be automatically redirected to the execution page, and you will see output logs starting to appear shortly after.