AWS AI Devices

AWS DeepLens (Supervised Learning)

AWS DeepLens allows you to create and deploy end-to-end computer vision–based applications.

AWS DeepRacer (Reinforcement Learning)

DeepRacer uses two training algorithms:

  1. Soft Actor Critic

    • embraces exploration

    • is data efficient

    • lacks stability

    • works only in a continuous action space

  2. Proximal Policy Optimization

    • stable

    • data hungry

    • works in both discrete and continuous action spaces

  • An action space is the set of all valid actions, or choices, available to an agent as it interacts with an environment.

    • Discrete action space represents all of an agent's possible actions for each state in a finite set of steering angle and throttle value combinations.

    • Continuous action space allows the agent to select an action from a range of values that you define for each state.

  • Hyperparameters are variables that control the performance of your agent during training.

    • For example, the learning rate is a hyperparameter that controls how much new experience is counted in learning at each step. A higher learning rate results in faster training but may reduce the model's quality.

  • The reward function's purpose is to encourage the agent to reach its goal. Figuring out how to reward which actions is one of your most important jobs.
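A DeepRacer reward function is written in Python and receives a `params` dictionary describing the car's state at each step. The sketch below rewards the agent for staying near the track's center line; the keys used (`track_width`, `distance_from_center`) are part of DeepRacer's documented input, but the three-band reward scheme is only an illustration, not a tuned racing model:

```python
def reward_function(params):
    """Reward staying close to the center line (illustrative sketch)."""
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']

    # Three bands around the center line, with decreasing reward.
    marker_1 = 0.1 * track_width
    marker_2 = 0.25 * track_width
    marker_3 = 0.5 * track_width

    if distance_from_center <= marker_1:
        reward = 1.0
    elif distance_from_center <= marker_2:
        reward = 0.5
    elif distance_from_center <= marker_3:
        reward = 0.1
    else:
        reward = 1e-3  # near-zero reward: likely off track

    return float(reward)
```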

Exploration versus exploitation:

  • When a car first starts out, it explores by wandering in random directions. However, the more training an agent gets, the more it learns about an environment. This experience helps it become more confident about the actions it chooses.

  • Exploitation means the car begins to exploit or use information from previous experiences to help it reach its goal. Different training algorithms utilize exploration and exploitation differently.

  • An agent should exploit known information from previous experiences to achieve higher cumulative rewards, but it also needs to explore to gain new experiences that can be used in choosing the best actions in the future.
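One simple way to sketch this trade-off is an epsilon-greedy rule: with probability epsilon the agent explores by acting randomly, otherwise it exploits the best-known action. This is a generic illustration of exploration versus exploitation, not how SAC or PPO actually implement it (both explore via stochastic policies):

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """Pick an action index from estimated action values `q_values`.

    Explores (random action) with probability `epsilon`; otherwise
    exploits the action with the highest estimated value so far.
    """
    if rng.random() < epsilon:
        # Explore: random action, ignoring current value estimates.
        return rng.randrange(len(q_values))
    # Exploit: use what previous experience says is the best action.
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

Early in training a large epsilon favors exploration; as the agent gains experience, epsilon is typically decayed so it exploits more.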

AWS DeepComposer (Unsupervised Learning)

Each AWS DeepComposer Music studio experience supports three different generative AI techniques: generative adversarial networks (GANs), autoregressive convolutional neural networks (AR-CNNs), and transformers.

  • Use the GAN technique to create accompaniment tracks.

  • Use the AR-CNN technique to modify notes in your input track.

  • Use the transformers technique to extend your input track by up to 30 seconds.

Practical Uses of GANs

  • Drug design and discovery

  • Creating original art pieces

  • Cancer detection

Generator

  • The generator takes in a batch of single-track piano rolls (melody) as the input and generates a batch of multi-track piano rolls as the output by adding accompaniments to each of the input music tracks.

  • The discriminator then takes these generated music tracks and predicts how far they deviate from the real data present in the training dataset. This deviation is called the generator loss. This feedback from the discriminator is used by the generator to incrementally get better at creating realistic output.

Discriminator

  • As the generator gets better at creating music accompaniments, it begins fooling the discriminator. So, the discriminator needs to be retrained as well. The discriminator measures the discriminator loss to evaluate how well it is differentiating between real and fake data.

Beginning with the discriminator on the first iteration, we alternate training these two networks until we reach a stop condition; for example, the algorithm has seen the entire dataset a certain number of times, or the generator and discriminator losses reach a plateau.

  • Generator: A neural network that learns to create new data resembling the source data on which it was trained.

  • Discriminator: A neural network trained to differentiate between real and synthetic data.

  • Generator loss: Measures how far the output data deviates from the real data present in the training dataset.

  • Discriminator loss: Evaluates how well the discriminator differentiates between real and fake data.
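The alternating schedule described above can be sketched as a minimal training loop. The two per-network update callables are placeholders standing in for one gradient step on each network, not a real GAN implementation:

```python
def train_gan(steps, train_discriminator, train_generator):
    """Alternate one discriminator update, then one generator update,
    per step (discriminator first), stopping after a fixed number of
    steps - one of the stop conditions mentioned above. Each callable
    performs one training step and returns that step's loss."""
    history = []
    for _ in range(steps):
        d_loss = train_discriminator()  # D learns to tell real from generated
        g_loss = train_generator()      # G improves using D's feedback
        history.append((d_loss, g_loss))
        # A real loop might also break here once both losses plateau.
    return history

# Toy usage: stub "training" steps that just record the call order.
calls = []
history = train_gan(
    steps=2,
    train_discriminator=lambda: calls.append('D') or 0.7,
    train_generator=lambda: calls.append('G') or 1.2,
)
# calls is now ['D', 'G', 'D', 'G']: discriminator first, then alternating.
```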
