Why is Machine Learning useful?


Machine Learning powers everyday applications such as:
1. Fraud Detection
2. Traffic Alerts on Google Maps
3. Google Translate
4. Automatic Spam Filtering
Hi there, have you managed to intertwine the concept of machine learning with Forex trading, or is this post more of a general observation on ML?

Have you read Fabrice Daniel's blog on Deep Learning? His backtesting experiments, built in Golang with TensorFlow, look very interesting!
Without getting into too much technical jargon, I'll try to answer the title question.

  1. Machine Learning covers a wide range of techniques (algorithms), all useful in different situations, with some overlap. Learning methods are commonly called "supervised" or "unsupervised". Supervised learning uses human-labeled examples to train the algorithm, for instance to teach it a categorization. Unsupervised learning is more free-form: the algorithm is handed unlabeled data and left to discover structure in it on its own, such as clusters of similar items.
  2. Some specific algorithms are great for trading. Of course, it is important for the person using them to understand how each one fits a specific type of strategy. For instance, the LSTM (Long Short-Term Memory) algorithm is a great way to make predictions on sequential data. Think of it as a way to remember patterns over time. It would be best used as part of a larger system for long-term strategy and price-direction prediction.
  3. Other algorithms are good for trading, and specifically for live trading; these include Deep-Q learning, a type of Reinforcement Learning (RL). Think of a system that learns how to play a specific video game. It has actions it can perform, and the environment hands it the consequences of its actions. If the consequences are "good", the system reinforces the behavior that led to them. The "deep" part comes in because there can be many levels of behavior it learns for specific conditions in its environment... so it can learn a strategy, and then exploit that strategy.
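To make that action/consequence feedback loop concrete, here is a minimal sketch of tabular Q-learning, the non-deep precursor of Deep-Q learning (the 5-cell "game", its reward numbers, and all hyperparameters are invented purely for illustration, not taken from any trading system):

```python
# Tabular Q-learning sketch: a tiny made-up environment, a 5-cell corridor
# where the agent earns +1 for reaching the rightmost cell.
import random

N_STATES = 5          # cells 0..4; cell 4 is the goal
ACTIONS = [-1, +1]    # move left or right

def step(state, action):
    """Environment: apply the action, return (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + action))
    done = (nxt == N_STATES - 1)
    return nxt, (1.0 if done else 0.0), done

random.seed(0)
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action index]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit what we know, sometimes explore
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        nxt, reward, done = step(state, ACTIONS[a])
        # the "consequence" (reward) feeds back into the value estimate
        target = reward + (0.0 if done else gamma * max(Q[nxt]))
        Q[state][a] += alpha * (target - Q[state][a])
        state = nxt

# After training, the greedy policy should always move right, toward the goal.
policy = ["left" if q[0] > q[1] else "right" for q in Q[:-1]]
print(policy)
```

In Deep-Q learning, the `Q` table is replaced by a neural network so the same loop scales to huge state spaces (like a screen full of pixels, or a book of market data).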
Any system like the ones you mention: fraud detection, Google Translate, automatic spam filtering (I'm not sure how traffic notifications actually use learning, because it seems like a pretty straightforward situation... but then again, I'm never in traffic, so I don't use that feature)... each of these systems uses a different type of learning. Google Translate uses what's called an auto-encoder... well, Google's own version of an auto-encoder, mixed with maybe some other methods.

Auto-encoders are unsupervised learning, so they learn without user-labeled examples. However, Google Translate also used to ask people for feedback on its translations, which hints that they may be using a GAN technique (Generative Adversarial Network). These machine learning algorithms can literally learn how to draw fully photorealistic human faces from scratch: https://thispersondoesnotexist.com/ <--- just load the page, then hit refresh to see a new face... it's creepily good.

This works because baked into the GAN idea is the idea of critique. The "Adversarial" part exists because a GAN is two networks, not one. The first network tries to draw a face and gets better at drawing faces; the second network compares that face to real faces and gets better at discerning fake faces from real ones. The two networks are tied together in a feedback loop... so imagine an artist and a critic. The artist paints a face; the critic tells the artist his painting sucks, but not why. The artist is then forced to paint a new face, and the little dance continues until the critic says "okay, this is a pretty realistic face". If you could take this entire process and make a computer do it, that would be a GAN.
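The artist-and-critic loop can be sketched in miniature. This toy shrinks everything to single numbers so the gradients can be written by hand: the "real faces" are just random samples near 5.0, and the artist's entire "painting" is one learnable value (all numbers and hyperparameters here are my own invention, nothing like a production GAN):

```python
# Toy 1-D GAN: a critic D(x) = sigmoid(w*x + b) and an artist whose
# output is a single learnable value theta. Real data clusters near 5.0.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
w, b = 0.1, 0.0   # critic (discriminator) parameters
theta = 0.0       # artist (generator) parameter: its whole "painting"
lr = 0.05

for _ in range(4000):
    real = random.gauss(5.0, 0.2)   # a "real" sample
    fake = theta                    # the artist's current attempt

    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)

    # Critic step: push D(real) toward 1 and D(fake) toward 0.
    # Hand-derived gradients of -ln D(real) - ln(1 - D(fake)):
    grad_w = -(1.0 - d_real) * real + d_fake * fake
    grad_b = -(1.0 - d_real) + d_fake
    w -= lr * grad_w
    b -= lr * grad_b

    # Artist step: try to fool the refreshed critic, i.e. push D(fake)
    # toward 1. Gradient of -ln D(fake) with respect to theta:
    d_fake = sigmoid(w * fake + b)
    theta -= lr * (-(1.0 - d_fake) * w)

print(theta)  # the artist's output should have drifted toward the real data
```

The artist never sees the real data directly; it only learns from the critic's verdict, which is the whole trick of the adversarial setup.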

Instead of using another neural network as the critic, you could just as easily use the market itself, judging the generator network (the artist from the example) by whether or not its actions make a profit.

I could really go into all of your examples, but I don't want to make a super long second post here, hahahaha.

I hope this was helpful.