Natural language processing has garnered great attention of late for two reasons: there is ample room for improvement, and any success promises to be immensely rewarding.
A wrongly forwarded document or response can create a lot of miscommunication between departments within a company, and even more so with a client. So care should be taken when deploying automated communication systems, which need a human-like understanding of language.
The Salesforce research team proposes a model which brings together the attributes of knowledge graphs and machine learning.
Salesforce’s AI assistant Einstein depends on machine learning models for its functioning.
So the research team at Salesforce developed an approach for query answering (QA) tasks, called multi-hop graph reasoning.
By combining the graph-based approach to semantics with the attributes of ML modelling, the researchers aim to find the sweet spot between the quantitative and qualitative aspects of a model.
Knowledge Graphs And NLP
Large-scale knowledge graphs are known for their ability to support NLP applications like semantic search or dialogue generation.
Knowledge graphs are intrinsically incomplete, and adopting reinforcement learning gives the model a better chance in targeted searches.
But this approach, too, comes with its own set of challenges, such as an agent arriving at a correct answer whose link to its source is missing from the training graph. The agent can also take unwanted or incorrect paths simply because they led to a correct answer in the past.
So, the researchers at Salesforce adopted a pre-trained embedding-based model to estimate a soft reward for reaching a target. Embedding models were chosen because they are good at capturing the traits of semantics.
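The reward-shaping idea can be sketched as follows: a hit on the correct answer earns the full terminal reward, while a miss falls back to a soft reward derived from the pre-trained embedding model's score. The function below is a minimal illustration, assuming the embedding model emits an unbounded score (a logit) that we squash into (0, 1); it is not the exact formulation used in the paper.

```python
import math

def shaped_reward(reached_correct_answer, embedding_score):
    """Reward-shaping sketch: a correct hit gets the full reward of 1.0;
    otherwise a pre-trained one-hop embedding model supplies a soft
    reward (here, its raw score squashed through a sigmoid)."""
    if reached_correct_answer:
        return 1.0
    # Soft reward in (0, 1): the embedding model's estimate that the
    # reached entity is in fact a valid (unobserved) answer.
    return 1.0 / (1.0 + math.exp(-embedding_score))
```

The soft reward is what reduces the impact of false negatives: an answer missing from the training graph can still earn a non-zero reward if the embedding model scores it highly.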
Along with this, to enforce effective exploration, outgoing edges are randomly blocked, a technique called action dropout. This reduces the influence of bad paths in the network.
Source: Salesforce research
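The edge-blocking step can be sketched as below. This is a simplified illustration, assuming each candidate edge is kept independently with some probability; the guard that keeps at least one edge (so the agent is never stranded) is an assumption of this sketch.

```python
import random

def action_dropout(outgoing_edges, keep_prob=0.9, rng=random):
    """Action-dropout sketch: randomly mask an agent's outgoing edges so
    it explores paths beyond those favoured early in training."""
    kept = [e for e in outgoing_edges if rng.random() < keep_prob]
    # Never strand the agent: fall back to one random edge if all
    # edges were masked out.
    return kept if kept else [rng.choice(outgoing_edges)]
```

During training the policy only sees the surviving edges, which counters the tendency of reinforcement learning to keep reinforcing a few early-discovered paths.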
In more complex tasks, embedding models miss out on the symbolic compositionality of knowledge graphs. This was addressed using multi-hop reasoning alongside the reward-shaping strategy.
Multi-hop reasoning enables the model to learn from relational paths in the knowledge graphs.
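A relational path is simply a chain of relations followed hop by hop through the graph. The toy walk below illustrates the idea; the graph layout (a dict keyed by entity-relation pairs) and the example entities are assumptions for illustration, not the paper's data structures.

```python
def multi_hop(graph, start, relations):
    """Follow a chain of relations from a start entity.

    graph: dict mapping (entity, relation) -> set of target entities.
    Returns the set of entities reachable via the full relation chain.
    """
    frontier = {start}
    for rel in relations:
        # Expand every entity in the current frontier along `rel`.
        frontier = {t for h in frontier for t in graph.get((h, rel), set())}
    return frontier

# Hypothetical toy graph: two hops compose into one inferred fact.
toy_graph = {
    ("alice", "born_in"): {"paris"},
    ("paris", "capital_of"): {"france"},
}
```

Here `multi_hop(toy_graph, "alice", ["born_in", "capital_of"])` composes the two hops to infer which country Alice was born in, the kind of symbolic composition a one-hop embedding model cannot expose explicitly.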
This model demonstrates that reward shaping can be used as a regularizer to alleviate noisy pathways which are biased towards prior knowledge.
To list unique predictions, beam search decoding is used. This avoids the repetition which occurs when multiple paths are searched for a single target. The unique entities are then ranked according to the scores assigned to them.
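The deduplicate-and-rank step can be sketched as follows, assuming the beam is a list of (entity, score) pairs in which several paths may land on the same entity; keeping the best score per entity is an assumption of this sketch.

```python
def rank_unique_entities(beam):
    """Collapse a beam of (entity, score) pairs to unique entities,
    keeping the best score per entity, then rank them descending."""
    best = {}
    for entity, score in beam:
        if entity not in best or score > best[entity]:
            best[entity] = score
    return sorted(best.items(), key=lambda kv: kv[1], reverse=True)
```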
Every link in the graph is treated as bidirectional, which allows searching in the reverse direction as well.
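A common way to make every link bidirectional is to add an inverse triple for each original one, as sketched below; the `"_inv"` relation suffix is an assumption of this sketch.

```python
def add_inverse_edges(triples):
    """For each (head, relation, tail) triple, add the inverse triple
    (tail, relation_inv, head) so paths can traverse edges backwards."""
    out = list(triples)
    out += [(t, r + "_inv", h) for h, r, t in triples]
    return out
```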
The datasets used for this experiment include the Unified Medical Language System (UMLS) and graph reasoning benchmarks, among others.
Read more about the results here
- Reduction in the impact of false negative supervision with the adoption of a pre-trained one-hop embedding model.
- Randomly generated edge masks decrease bad connections, countering the narrowing of search diversity that results from reinforcement learning bias.
- This method improves on pre-existing path-based KGQA models and can go toe to toe with current embedding-based models.
- A performance gap between RL-based approaches and embedding-based approaches for knowledge-graph-based query answering still remains.
The main aim of this research is to improve the already smart Einstein AI, which has been seamlessly integrated into the workflow. It helps companies leverage data such as customer records, emails or even IoT signals to train predictive models.
Text mining forms an important aspect of predictive model building, and with an improved QA system, Salesforce looks to equip Einstein AI and make it smarter.
Adopting reinforcement learning for graph-based reasoning, so that the model searches for responses more efficiently, opens up other interesting avenues in the researchers' attempt to attain a human-level understanding of language.