{"_id":"584aeea99588370f00608ab8","version":{"_id":"584aeea89588370f00608a3b","project":"559ae8ec7ae7f80d0096d813","__v":1,"createdAt":"2016-12-09T17:49:28.502Z","releaseDate":"2016-12-09T17:49:28.502Z","categories":["584aeea89588370f00608a3c","584aeea89588370f00608a3d","584aeea89588370f00608a3e","584aeea89588370f00608a3f","584aeea89588370f00608a40","584aeea89588370f00608a41","584aeea89588370f00608a42","584aeea89588370f00608a43","584aeea89588370f00608a44","584aeea89588370f00608a45","584aeea89588370f00608a46","584aeea89588370f00608a47","584aeea89588370f00608a48","584aeea89588370f00608a49","584aeea89588370f00608a4a","584aeea89588370f00608a4b","584aeea89588370f00608a4c","584aeea89588370f00608a4d","584aeea89588370f00608a4e","584aeea89588370f00608a4f"],"is_deprecated":false,"is_hidden":false,"is_beta":false,"is_stable":true,"codename":"","version_clean":"4.2.3","version":"4.2.3"},"category":{"_id":"584aeea89588370f00608a4e","version":"584aeea89588370f00608a3b","project":"559ae8ec7ae7f80d0096d813","__v":0,"sync":{"url":"","isSync":false},"reference":true,"createdAt":"2015-07-07T21:29:25.650Z","from_sync":false,"order":18,"slug":"other-resources","title":"Other resources"},"parentDoc":null,"project":"559ae8ec7ae7f80d0096d813","user":"559ae88c7ae7f80d0096d812","__v":0,"updates":[],"next":{"pages":[],"description":""},"createdAt":"2015-07-07T21:30:48.022Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"isReference":true,"order":3,"body":"[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Detailed Mode Output\"\n}\n[/block]\nDetailed Mode performs analysis on individual documents. In the Semantria API the user can customize almost every part of the analysis; from constraining the number of results for each category to defining the parts of speech which the server will detect, the user can configure Detailed Mode to suit your needs in document sentiment analysis. In this section, we provide a quick reference for customizable options and parameters for POS tagging, as well as a detailed explanation of Detailed Mode's output.\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Line-by-line Term Explanation\"\n}\n[/block]\nThis output is from analyzing the text below. Take note that it has been abbreviated is some spots for clarity.\n\n*Recently a toy catalog went out. If you did not get one you are SOL. I just called the corporate and they were mailed at random and put only in certain news papers. You can not get them at the store or ask corporate to send you one. There were some really good coupons like this week 25% off one toy. There was a different coupon each week. I think that this is so wrong. I for one have a target card just for Christmas shopping and would have loved those coupons. And since I have a card I would think a catalog should have been sent to me. Guess who will not be shopping at Target this year!* \n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n\\t# This array shows all the auto categories (wikipedia categories) found in the text\\n\\t\\\"auto_categories\\\": [{ \\n\\t\\t\\t# These are the categories within the auto category. 
Note that not all auto categories will have categories\\n\\t\\t\\t\\\"categories\\\": [{\\n\\t\\t\\t\\t\\t\\\"sentiment_polarity\\\": \\\"neutral\\\",\\n\\t\\t\\t\\t\\t\\\"sentiment_score\\\": 0.4875,\\n\\t\\t\\t\\t\\t\\\"strength_score\\\": 0.5772358,\\n\\t\\t\\t\\t\\t\\\"title\\\": \\\"Bonds\\\",\\n\\t\\t\\t\\t\\t\\\"type\\\": \\\"info\\\"\\n\\t\\t\\t\\t}, {\\n\\t\\t\\t\\t\\t\\\"sentiment_polarity\\\": \\\"neutral\\\",\\n\\t\\t\\t\\t\\t\\\"sentiment_score\\\": 0.4875,\\n\\t\\t\\t\\t\\t\\\"strength_score\\\": 0.523945,\\n\\t\\t\\t\\t\\t\\\"title\\\": \\\"Discount_stores\\\",\\n\\t\\t\\t\\t\\t\\\"type\\\": \\\"info\\\"\\n\\t\\t\\t\\t}, {\\n\\t\\t\\t\\t\\t\\\"sentiment_polarity\\\": \\\"neutral\\\",\\n\\t\\t\\t\\t\\t\\\"sentiment_score\\\": 0.4875,\\n\\t\\t\\t\\t\\t\\\"strength_score\\\": 0.5088412,\\n\\t\\t\\t\\t\\t\\\"title\\\": \\\"Government_bonds\\\",\\n\\t\\t\\t\\t\\t\\\"type\\\": \\\"info\\\"\\n\\t\\t\\t\\t}, {\\n\\t\\t\\t\\t\\t\\\"sentiment_polarity\\\": \\\"neutral\\\",\\n\\t\\t\\t\\t\\t\\\"sentiment_score\\\": 0.4875,\\n\\t\\t\\t\\t\\t\\\"strength_score\\\": 0.46849313,\\n\\t\\t\\t\\t\\t\\\"title\\\": \\\"Private_currencies\\\",\\n\\t\\t\\t\\t\\t\\\"type\\\": \\\"info\\\"\\n\\t\\t\\t\\t}\\n\\t\\t\\t],\\n\\t\\t\\t# This is the sentiment polarity for the auto category\\n\\t\\t\\t\\\"sentiment_polarity\\\": \\\"neutral\\\",\\n\\t\\t\\t# This is the sentiment score for the auto category\\n\\t\\t\\t\\\"sentiment_score\\\": 0.4875,\\n\\t\\t\\t# This is the relevance score for the auto category\\n\\t\\t\\t\\\"strength_score\\\": 0.8387881,\\n\\t\\t\\t# This is the title of the auto category\\n\\t\\t\\t\\\"title\\\": \\\"Business\\\",\\n\\t\\t\\t# This is the type of auto category - node for a category that can contain other categories (as this example does), leaf for categories at the end of the tree\\n\\t\\t\\t\\\"type\\\": \\\"node\\\"\\n\\t\\t}\\n\\t],\\n\\t# This is the ID of the config used to process the data\\n\\t\\\"config_id\\\": \\\"486e1e63-2748-4a38-a457-862d92219fec\\\",\\n\\t# This array give the details of the document. Each element in the array is a sentence. Only a single sentence from the example text is shown here due to length. \\n\\t\\\"details\\\": [{\\n\\t\\t\\t# If the sentence is imperative or not\\n\\t\\t\\t\\\"is_imperative\\\": false,\\n\\t\\t\\t# If the sentence should carry sentiment\\n\\t\\t\\t\\\"is_polar\\\": true,\\n\\t\\t\\t# This array lists all of the words in the sentence\\n\\t\\t\\t\\\"words\\\": [{\\n\\t\\t\\t\\t\\t# Was the word negated by a negator\\n\\t\\t\\t\\t\\t\\\"is_negated\\\": false,\\n\\t\\t\\t\\t\\t# Sentiment score for the word\\n\\t\\t\\t\\t\\t\\\"sentiment_score\\\": 0.0,\\n\\t\\t\\t\\t\\t# Stemmed form of the word\\n\\t\\t\\t\\t\\t\\\"stemmed\\\": \\\"recent\\\",\\n\\t\\t\\t\\t\\t# Part of speech tag. 
See http://dev.lexalytics.com/wiki/pmwiki.php?n=Main.POSTags for all tags\\n\\t\\t\\t\\t\\t\\\"tag\\\": \\\"RB\\\",\\n\\t\\t\\t\\t\\t# Actual word\\n\\t\\t\\t\\t\\t\\\"title\\\": \\\"Recently\\\",\\n\\t\\t\\t\\t\\t# Normalized part of speech tag\\n\\t\\t\\t\\t\\t\\\"type\\\": \\\"Adjective\\\"\\n\\t\\t\\t\\t}, {\\n\\t\\t\\t\\t\\t\\\"is_negated\\\": false,\\n\\t\\t\\t\\t\\t\\\"sentiment_score\\\": 0.0,\\n\\t\\t\\t\\t\\t\\\"stemmed\\\": \\\"a\\\",\\n\\t\\t\\t\\t\\t\\\"tag\\\": \\\"DT\\\",\\n\\t\\t\\t\\t\\t\\\"title\\\": \\\"a\\\",\\n\\t\\t\\t\\t\\t\\\"type\\\": \\\"Determiner\\\"\\n\\t\\t\\t\\t}, {\\n\\t\\t\\t\\t\\t\\\"is_negated\\\": false,\\n\\t\\t\\t\\t\\t\\\"sentiment_score\\\": 0.0,\\n\\t\\t\\t\\t\\t\\\"stemmed\\\": \\\"toy\\\",\\n\\t\\t\\t\\t\\t\\\"tag\\\": \\\"NN\\\",\\n\\t\\t\\t\\t\\t\\\"title\\\": \\\"toy\\\",\\n\\t\\t\\t\\t\\t\\\"type\\\": \\\"Noun\\\"\\n\\t\\t\\t\\t}, {\\n\\t\\t\\t\\t\\t\\\"is_negated\\\": false,\\n\\t\\t\\t\\t\\t\\\"sentiment_score\\\": 0.0,\\n\\t\\t\\t\\t\\t\\\"stemmed\\\": \\\"catalog\\\",\\n\\t\\t\\t\\t\\t\\\"tag\\\": \\\"NN\\\",\\n\\t\\t\\t\\t\\t\\\"title\\\": \\\"catalog\\\",\\n\\t\\t\\t\\t\\t\\\"type\\\": \\\"Noun\\\"\\n\\t\\t\\t\\t}, {\\n\\t\\t\\t\\t\\t\\\"is_negated\\\": false,\\n\\t\\t\\t\\t\\t\\\"sentiment_score\\\": 0.0,\\n\\t\\t\\t\\t\\t\\\"stemmed\\\": \\\"go\\\",\\n\\t\\t\\t\\t\\t\\\"tag\\\": \\\"VBD\\\",\\n\\t\\t\\t\\t\\t\\\"title\\\": \\\"went\\\",\\n\\t\\t\\t\\t\\t\\\"type\\\": \\\"Verb\\\"\\n\\t\\t\\t\\t}, {\\n\\t\\t\\t\\t\\t\\\"is_negated\\\": false,\\n\\t\\t\\t\\t\\t\\\"sentiment_score\\\": 0.0,\\n\\t\\t\\t\\t\\t\\\"stemmed\\\": \\\"out\\\",\\n\\t\\t\\t\\t\\t\\\"tag\\\": \\\"RP\\\",\\n\\t\\t\\t\\t\\t\\\"title\\\": \\\"out\\\",\\n\\t\\t\\t\\t\\t\\\"type\\\": \\\"Misc\\\"\\n\\t\\t\\t\\t}, {\\n\\t\\t\\t\\t\\t\\\"is_negated\\\": false,\\n\\t\\t\\t\\t\\t\\\"sentiment_score\\\": 0.0,\\n\\t\\t\\t\\t\\t\\\"stemmed\\\": \\\".\\\",\\n\\t\\t\\t\\t\\t\\\"tag\\\": \\\".\\\",\\n\\t\\t\\t\\t\\t\\\"title\\\": \\\".\\\",\\n\\t\\t\\t\\t\\t\\\"type\\\": \\\"Symbol\\\"\\n\\t\\t\\t\\t}\\n\\t\\t\\t]\\n\\t\\t}, {\\n\\t\\t\\t# Other sentences omitted for length\\n\\t\\t}\\n\\t],\\n\\t# This array lists all entities found\\n\\t\\\"entities\\\": [{\\n\\t\\t\\t# Did the entity match the optional confidence query\\n\\t\\t\\t\\\"confident\\\": true,\\n\\t\\t\\t# What type of entity is it\\n\\t\\t\\t\\\"entity_type\\\": \\\"Company\\\",\\n\\t\\t\\t# How much sentiment evidence is there?\\n\\t\\t\\t\\\"evidence\\\": 1,\\n\\t\\t\\t# Was this entity a focus of the text?\\n\\t\\t\\t\\\"is_about\\\": false,\\n\\t\\t\\t# The label of the entitiy. 
This can be overridden in user-defined entities.\\n\\t\\t\\t\\\"label\\\": \\\"Company\\\",\\n\\t\\t\\t# Array of actual mentions of the entity.\\n\\t\\t\\t\\\"mentions\\\": [{\\n\\t\\t\\t\\t\\t# Was the entity negated ?\\n\\t\\t\\t\\t\\t\\\"is_negated\\\": false,\\n\\t\\t\\t\\t\\t# Actual word found in text\\n\\t\\t\\t\\t\\t\\\"label\\\": \\\"Target\\\",\\n\\t\\t\\t\\t\\t# Locations info can be ued for hit-highlighting.\\n\\t\\t\\t\\t\\t\\\"locations\\\": [{\\n\\t\\t\\t\\t\\t\\t\\t\\\"length\\\": 6,\\n\\t\\t\\t\\t\\t\\t\\t\\\"offset\\\": 582\\n\\t\\t\\t\\t\\t\\t}\\n\\t\\t\\t\\t\\t]\\n\\t\\t\\t\\t}\\n\\t\\t\\t],\\n\\t\\t\\t# Sentiment for the entity in words\\n\\t\\t\\t\\\"sentiment_polarity\\\": \\\"neutral\\\",\\n\\t\\t\\t# Sentiment for the entity as a float\\n\\t\\t\\t\\\"sentiment_score\\\": 0.0,\\n\\t\\t\\t# This entity does not have themes, but if it did this is where you would see the themes associated with the entity\\n\\t\\t\\t\\\"themes\\\": [{\\n\\t\\t\\t\\t\\t# Amount of sentiment evidence for this theme\\n\\t\\t\\t\\t\\t\\\"evidence\\\": ###,\\n\\t\\t\\t\\t\\t# Is this theme a focus of the text?\\n\\t\\t\\t\\t\\t\\\"is_about\\\" : (true or false),\\n\\t\\t\\t\\t\\t# Array of actual mentions of the theme\\n\\t\\t\\t\\t\\t\\\"mentions\\\": [{\\n\\t\\t\\t\\t\\t\\t\\t\\\"is_negated\\\": (true or false),\\n\\t\\t\\t\\t\\t\\t\\t\\\"label\\\": \\\"\\\",\\n\\t\\t\\t\\t\\t\\t\\t\\\"locations\\\": [{\\n\\t\\t\\t\\t\\t\\t\\t\\t\\t\\\"length\\\": ###,\\n\\t\\t\\t\\t\\t\\t\\t\\t\\t\\\"offset\\\": ###\\n\\t\\t\\t\\t\\t\\t\\t\\t}\\n\\t\\t\\t\\t\\t\\t\\t],\\n\\t\\t\\t\\t\\t\\t\\t # If an object is negated, the negating phrase\\n\\t\\t\\t\\t\\t\\t\\t\\\"negating_phrase\\\": \\\"\\\"\\n\\t\\t\\t\\t\\t\\t}\\n\\t\\t\\t\\t\\t],\\n\\t\\t\\t\\t\\t# Normalized(lower - cased stemmed)version of the theme\\n\\t\\t\\t\\t\\t\\\"normalized\\\": \\\"\\\",\\n\\t\\t\\t\\t\\t# Sentiment for the theme in words\\n\\t\\t\\t\\t\\t\\\"sentiment_polarity\\\": \\\"\\\",\\n\\t\\t\\t\\t\\t# Sentiment for the theme in a float\\n\\t\\t\\t\\t\\t\\\"sentiment_score\\\": ###,\\n\\t\\t\\t\\t\\t# Stemmed version of the theme\\n\\t\\t\\t\\t\\t\\\"stemmed\\\": \\\"\\\",\\n\\t\\t\\t\\t\\t# Relevancy of the theme to the entity\\n\\t\\t\\t\\t\\t\\\"strength_score\\\": ###,\\n\\t\\t\\t\\t\\t# Actual words of the theme\\n\\t\\t\\t\\t\\t\\\"title\\\": \\\"\\\"\\n\\t\\t\\t\\t},\\n\\t\\t\\t\\t # More themes would be found here\\n\\t\\t\\t],\\n\\t\\t\\t# Entity name\\n\\t\\t\\t\\\"title\\\": \\\"Target\\\",\\n\\t\\t\\t# Named entities are automatically discovered, user-entities are defined\\n\\t\\t\\t\\\"type\\\": \\\"named\\\"\\n\\t\\t}\\n\\t],\\n\\t# The relations array lists the relationships found in the text. None were found in this example so the below array is an empty example\\n\\t\\\"relations\\\": [{\\n\\t\\t\\t # Named relations are auto-discovered\\n\\t\\t\\t\\\"type\\\": \\\"\\\",\\n\\t\\t\\t # the words triggering the relationship\\n\\t\\t\\t\\\"extra\\\": \\\"\\\",\\n\\t\\t\\t # The entities involved in the relationship\\n\\t\\t\\t\\\"entities\\\": [{\\n\\t\\t\\t\\t\\t\\\"title\\\": \\\"\\\",\\n\\t\\t\\t\\t\\t\\\"entity_type\\\": \\\"\\\"\\n\\t\\t\\t\\t}\\n\\t\\t\\t],\\n\\t\\t\\t # Type of relationship\\n\\t\\t\\t\\\"relation_type\\\": \\\"\\\",\\n\\t\\t\\t\\\"confidence_score\\\": ###\\n\\t\\t}\\n\\t],\\n\\t# ID of the document\\n\\t\\\"id\\\": \\\"5c8-0001\\\",\\n\\t# The intentions portion of the results. 
Note that intentions are only offered in English\\n\\t\\\"intentions\\\": [{\\n\\t\\t\\t# Phrase showing intent\\n\\t\\t\\t\\\"evidence_phrase\\\": \\\"can not get\\\",\\n\\t\\t\\t# Intention type\\n\\t\\t\\t\\\"type\\\": \\\"quit\\\",\\n\\t\\t\\t# The what of the intent\\n\\t\\t\\t\\\"what\\\": \\\"them\\\",\\n\\t\\t\\t# The who of the intent\\n\\t\\t\\t\\\"who\\\": \\\"You\\\"\\n\\t\\t}, {\\n\\t\\t\\t\\\"evidence_phrase\\\": \\\"did not get\\\",\\n\\t\\t\\t\\\"type\\\": \\\"quit\\\",\\n\\t\\t\\t\\\"what\\\": \\\"one you\\\",\\n\\t\\t\\t\\\"who\\\": \\\"If you\\\"\\n\\t\\t}\\n\\t],\\n\\t# Language of document\\n\\t\\\"language\\\": \\\"English\\\",\\n\\t# Confidence in the language\\n\\t\\\"language_score\\\": 0.16480173,\\n\\t# Any metadata passed to Semantria would be displayed here. This example did not use metadata\\n\\t\\\"metadata\\\": {\\n\\t},\\n\\t# This dictionary lists the model-based sentiment scores. There were no model-based sentiment scores in this example so the below is intentionally empty \\n\\t\\\"model_sentiment\\\": {\\n\\t\\t # likelihood the document had a mixed sentiment score\\n\\t\\t\\\"mixed_score\\\": ###,\\n\\t\\t # Model name.Semantria ships with a defaultmodel.\\n\\t\\t\\\"model_name\\\": \\\"\\\",\\n\\t\\t # Likelihood the document had a negative score\\n\\t\\t\\\"negative_score\\\": ###,\\n\\t\\t # Likelihood the document had a neutral score\\n\\t\\t\\\"neutral_score\\\": ###,\\n\\t\\t # Likelihood the document had a neutral score\\n\\t\\t\\\"positive_score\\\": ###,\\n\\t\\t # Most likely sentiment polarity in words\\n\\t\\t\\\"sentiment_polarity\\\": \\\"\\\"\\n\\t},\\n\\t# This array lists all sentiment phrases found in the text.\\n\\t\\\"phrases\\\": [ {\\n\\t\\t\\t# Whether the phrase was intensified\\n\\t\\t\\t\\\"is_intensified\\\": false,\\n\\t\\t\\t# Whether the phrase was negated\\n\\t\\t\\t\\\"is_negated\\\": false,\\n\\t\\t\\t# length of phrase in bytes\\n\\t\\t\\t\\\"length\\\": 5,\\n\\t\\t\\t# beginning position of phrase in bytes\\n\\t\\t\\t\\\"offset\\\": 449,\\n\\t\\t\\t# Phrase sentiment in words\\n\\t\\t\\t\\\"sentiment_polarity\\\": \\\"positive\\\",\\n\\t\\t\\t# Phrase sentiment in float\\n\\t\\t\\t\\\"sentiment_score\\\": 0.565,\\n\\t\\t\\t# Actual phrase\\n\\t\\t\\t\\\"title\\\": \\\"loved\\\",\\n\\t\\t\\t# Whether detected or possible\\n\\t\\t\\t\\\"type\\\": \\\"detected\\\"\\n\\t\\t}, {\\n\\t\\t\\t\\\"sentiment_polarity\\\": \\\"neutral\\\",\\n\\t\\t\\t\\\"title\\\": \\\"been sent\\\",\\n\\t\\t\\t# Semantria's suggestions of possible sentiment phrases to add to custom configuration\\n\\t\\t\\t\\\"type\\\": \\\"possible\\\"\\n\\t\\t}, {\\n\\t\\t\\t\\\"sentiment_polarity\\\": \\\"neutral\\\",\\n\\t\\t\\t\\\"title\\\": \\\"certain news\\\",\\n\\t\\t\\t\\\"type\\\": \\\"possible\\\"\\n\\t\\t}, {\\n\\t\\t\\t\\\"sentiment_polarity\\\": \\\"neutral\\\",\\n\\t\\t\\t\\\"title\\\": \\\"different coupon\\\",\\n\\t\\t\\t\\\"type\\\": \\\"possible\\\"\\n\\t\\t}, {\\n\\t\\t\\t\\\"sentiment_polarity\\\": \\\"neutral\\\",\\n\\t\\t\\t\\\"title\\\": \\\"not be\\\",\\n\\t\\t\\t\\\"type\\\": \\\"possible\\\"\\n\\t\\t}, {\\n\\t\\t\\t\\\"sentiment_polarity\\\": \\\"neutral\\\",\\n\\t\\t\\t\\\"title\\\": \\\"not get\\\",\\n\\t\\t\\t\\\"type\\\": \\\"possible\\\"\\n\\t\\t}, {\\n\\t\\t\\t\\\"sentiment_polarity\\\": \\\"neutral\\\",\\n\\t\\t\\t\\\"title\\\": \\\"not get\\\",\\n\\t\\t\\t\\\"type\\\": \\\"possible\\\"\\n\\t\\t}, {\\n\\t\\t\\t\\\"is_intensified\\\": false,\\n\\t\\t\\t\\\"is_negated\\\": false,\\n\\t\\t\\t\\\"length\\\": 
11,\\n\\t\\t\\t\\\"offset\\\": 251,\\n\\t\\t\\t\\\"sentiment_polarity\\\": \\\"neutral\\\",\\n\\t\\t\\t\\\"sentiment_score\\\": 0.41,\\n\\t\\t\\t\\\"title\\\": \\\"really good\\\",\\n\\t\\t\\t\\\"type\\\": \\\"detected\\\"\\n\\t\\t}, {\\n\\t\\t\\t\\\"is_intensified\\\": false,\\n\\t\\t\\t\\\"is_negated\\\": false,\\n\\t\\t\\t\\\"length\\\": 8,\\n\\t\\t\\t\\\"offset\\\": 366,\\n\\t\\t\\t\\\"sentiment_polarity\\\": \\\"neutral\\\",\\n\\t\\t\\t\\\"sentiment_score\\\": -0.4,\\n\\t\\t\\t\\\"title\\\": \\\"so wrong\\\",\\n\\t\\t\\t\\\"type\\\": \\\"detected\\\"\\n\\t\\t}\\n\\t],\\n\\t# Sentiment of document in words\\n\\t\\\"sentiment_polarity\\\": \\\"neutral\\\",\\n\\t# Sentiment of document as float\\n\\t\\\"sentiment_score\\\": 0.19166666,\\n\\t# Source text\\n\\t\\\"source_text\\\": \\\"Recently a toy catalog went out. If you did not get one you are SOL. I just called\\\\n the corporate and they were mailed at random and put only in certain news papers. You can\\\\n not get them at the store or ask corporate to send you one. There were some really good\\\\n coupons like this week 25% off one toy. There was a different coupon each week. I think\\\\n that this is so wrong. I for one have a target card just for Christmas shopping and would\\\\n have loved those coupons. And since I have a card I would think a catalog should have\\\\n been sent to me. Guess who will not be shopping at Target this year!\\\",\\n\\t# Semantria status of document\\n\\t\\\"status\\\": \\\"PROCESSED\\\",\\n\\t# Summary of document\\n\\t\\\"summary\\\": \\\"Recently a toy catalog went out... I just called the corporate and they were mailed at random and put only in certain news papers... There were some really good coupons like this week 25% off one toy... \\\",\\n\\t# Array of themes relevant at a document level\\n\\t\\\"themes\\\": [{\\n\\t\\t\\t# Amount of sentiment evidence for this theme\\n\\t\\t\\t\\\"evidence\\\": 7,\\n\\t\\t\\t# Was this theme a focus of the text\\n\\t\\t\\t\\\"is_about\\\": false,\\n\\t\\t\\t# Array of actual mentions of the theme\\n\\t\\t\\t\\\"mentions\\\": [{\\n\\t\\t\\t\\t\\t# Was the mention negated\\n\\t\\t\\t\\t\\t\\\"is_negated\\\": false,\\n\\t\\t\\t\\t\\t# Actual mention\\n\\t\\t\\t\\t\\t\\\"label\\\": \\\"different coupon\\\",\\n\\t\\t\\t\\t\\t# Mention location can be used for hit-highlighting\\n\\t\\t\\t\\t\\t\\\"locations\\\": [{\\n\\t\\t\\t\\t\\t\\t\\t\\\"length\\\": 16,\\n\\t\\t\\t\\t\\t\\t\\t\\\"offset\\\": 316\\n\\t\\t\\t\\t\\t\\t}\\n\\t\\t\\t\\t\\t]\\n\\t\\t\\t\\t}\\n\\t\\t\\t],\\n\\t\\t\\t# Normalized version of theme\\n\\t\\t\\t\\\"normalized\\\": \\\"different coupon\\\",\\n\\t\\t\\t# Sentiment for theme in words\\n\\t\\t\\t\\\"sentiment_polarity\\\": \\\"neutral\\\",\\n\\t\\t\\t# Sentiment fpr theme in a float\\n\\t\\t\\t\\\"sentiment_score\\\": 0.41,\\n\\t\\t\\t# Stemmed version of the theme\\n\\t\\t\\t\\\"stemmed\\\": \\\"different coupon\\\",\\n\\t\\t\\t# Relevancy of the theme\\n\\t\\t\\t\\\"strength_score\\\": 1.25,\\n\\t\\t\\t# Actual words of the theme\\n\\t\\t\\t\\\"title\\\": \\\"different coupon\\\"\\n\\t\\t}, {\\n\\t\\t\\t\\\"evidence\\\": 7,\\n\\t\\t\\t\\\"is_about\\\": true,\\n\\t\\t\\t\\\"mentions\\\": [{\\n\\t\\t\\t\\t\\t\\\"is_negated\\\": false,\\n\\t\\t\\t\\t\\t\\\"label\\\": \\\"good coupons\\\",\\n\\t\\t\\t\\t\\t\\\"locations\\\": [{\\n\\t\\t\\t\\t\\t\\t\\t\\\"length\\\": 13,\\n\\t\\t\\t\\t\\t\\t\\t\\\"offset\\\": 258\\n\\t\\t\\t\\t\\t\\t}\\n\\t\\t\\t\\t\\t]\\n\\t\\t\\t\\t}\\n\\t\\t\\t],\\n\\t\\t\\t\\\"normalized\\\": \\\"good 
coupon\\\",\\n\\t\\t\\t\\\"sentiment_polarity\\\": \\\"positive\\\",\\n\\t\\t\\t\\\"sentiment_score\\\": 2.58,\\n\\t\\t\\t\\\"stemmed\\\": \\\"good coupon\\\",\\n\\t\\t\\t\\\"strength_score\\\": 1.5833334,\\n\\t\\t\\t\\\"title\\\": \\\"good coupons\\\"\\n\\t\\t}, {\\n\\t\\t\\t\\\"evidence\\\": 4,\\n\\t\\t\\t\\\"is_about\\\": false,\\n\\t\\t\\t\\\"mentions\\\": [{\\n\\t\\t\\t\\t\\t\\\"is_negated\\\": false,\\n\\t\\t\\t\\t\\t\\\"label\\\": \\\"target card\\\",\\n\\t\\t\\t\\t\\t\\\"locations\\\": [{\\n\\t\\t\\t\\t\\t\\t\\t\\\"length\\\": 11,\\n\\t\\t\\t\\t\\t\\t\\t\\\"offset\\\": 393\\n\\t\\t\\t\\t\\t\\t}\\n\\t\\t\\t\\t\\t]\\n\\t\\t\\t\\t}\\n\\t\\t\\t],\\n\\t\\t\\t\\\"normalized\\\": \\\"target card\\\",\\n\\t\\t\\t\\\"sentiment_polarity\\\": \\\"positive\\\",\\n\\t\\t\\t\\\"sentiment_score\\\": 0.565,\\n\\t\\t\\t\\\"stemmed\\\": \\\"target card\\\",\\n\\t\\t\\t\\\"strength_score\\\": 0.2,\\n\\t\\t\\t\\\"title\\\": \\\"target card\\\"\\n\\t\\t}, {\\n\\t\\t\\t\\\"evidence\\\": 4,\\n\\t\\t\\t\\\"is_about\\\": true,\\n\\t\\t\\t\\\"mentions\\\": [{\\n\\t\\t\\t\\t\\t\\\"is_negated\\\": false,\\n\\t\\t\\t\\t\\t\\\"label\\\": \\\"toy catalog\\\",\\n\\t\\t\\t\\t\\t\\\"locations\\\": [{\\n\\t\\t\\t\\t\\t\\t\\t\\\"length\\\": 11,\\n\\t\\t\\t\\t\\t\\t\\t\\\"offset\\\": 11\\n\\t\\t\\t\\t\\t\\t}\\n\\t\\t\\t\\t\\t]\\n\\t\\t\\t\\t}\\n\\t\\t\\t],\\n\\t\\t\\t\\\"normalized\\\": \\\"toy catalog\\\",\\n\\t\\t\\t\\\"sentiment_polarity\\\": \\\"neutral\\\",\\n\\t\\t\\t\\\"sentiment_score\\\": 0.205,\\n\\t\\t\\t\\\"stemmed\\\": \\\"toy catalog\\\",\\n\\t\\t\\t\\\"strength_score\\\": 0.33333334,\\n\\t\\t\\t\\\"title\\\": \\\"toy catalog\\\"\\n\\t\\t}\\n\\t],\\n\\t# This array lists topics discovered in the text. Topics include queries and concept topics (also known as user categories). This example only hit on a query so the below shows the query topic output.\\n\\t\\\"topics\\\": [{\\n\\t\\t\\t# The number of query terms that hit in the document\\n\\t\\t\\t\\\"hitcount\\\": 2,\\n\\t\\t\\t# The ID of the query\\n\\t\\t\\t\\\"id\\\": \\\"4c640e2e-6e0f-44f9-a6c6-eca67f3962f3\\\",\\n\\t\\t\\t# An array listing the term hits\\n\\t\\t\\t\\\"mentions\\\": [{\\n\\t\\t\\t\\t\\t# Whether the term was negated\\n\\t\\t\\t\\t\\t\\\"is_negated\\\": false,\\n\\t\\t\\t\\t\\t# The term that hit\\n\\t\\t\\t\\t\\t\\\"label\\\": \\\"think\\\",\\n\\t\\t\\t\\t\\t# An array of locations of the term\\n\\t\\t\\t\\t\\t\\\"locations\\\": [{\\n\\t\\t\\t\\t\\t\\t\\t# The length in bytes of the term\\n\\t\\t\\t\\t\\t\\t\\t\\\"length\\\": 5,\\n\\t\\t\\t\\t\\t\\t\\t# The offset in bytes from the beginning of the document for the hit\\n\\t\\t\\t\\t\\t\\t\\t\\\"offset\\\": 346\\n\\t\\t\\t\\t\\t\\t}, {\\n\\t\\t\\t\\t\\t\\t\\t\\\"length\\\": 5,\\n\\t\\t\\t\\t\\t\\t\\t\\\"offset\\\": 502\\n\\t\\t\\t\\t\\t\\t}\\n\\t\\t\\t\\t\\t]\\n\\t\\t\\t\\t}\\n\\t\\t\\t],\\n\\t\\t\\t# The sentiment polarity of the query\\n\\t\\t\\t\\\"sentiment_polarity\\\": \\\"neutral\\\",\\n\\t\\t\\t# The sentiment score of the query as a float\\n\\t\\t\\t\\\"sentiment_score\\\": 0.0,\\n\\t\\t\\t# The name of the query\\n\\t\\t\\t\\\"title\\\": \\\"Customer Service\\\",\\n\\t\\t\\t# The type of topic. This can be concept or query\\n\\t\\t\\t\\\"type\\\": \\\"query\\\"\\n\\t\\t}\\n\\t]\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\nDetailed mode limits apply to both document mode and source mode of analysis. All limits have integer values of 0 to 20. 
Setting a limit to a score of 0 signifies zero interest in the output and will prevent the result for that parameter from appearing in the dataset.\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Detailed Mode output explanation\"\n}\n[/block]\nSemantria provides the user with a wealth of information in its sentiment analysis and data processing; sometimes it can be kind of hard to wade through. Here is a quick reference detailing everything the Semantria API will return to the user in Detailed Analysis Mode.\n\nEach document will have an *id* and each configuration has a unique *config_id*. The user can add *tags* and view the *status* of the document (\"queued,\" \"processed\" or \"failed\"). Semantria API will produce a *job_id* of the associated job, a *summary* of the document text, the *language* of the source text (and the *language score*, the percentage of the best language match among detected languages), and the *sentiment_score* and *sentiment_polarity*. \n\nIn detailed analysis of individual sentences, the API will return boolean values for *is_imperative* and *is_polar*. Imperative sentences, representing an action item, will be set to true. *is_polar* represents Semantria's guess as to whether the writer of the sentence meant to convey sentiment. For instance, \"Good morning all\" is a non-polar sentence despite containing a sentiment word of \"good.\"\n\nThe API will return a list of words grouped by the parent sentence. Each word will have a *tag*, POS *type*,* title*, *stemmed* form of the word, and *sentiment_score*.\n\nSemantria API will generate *auto_categories*; each category will have a* title*,* type* (\"node\"/root or \"leaf\"/nested value), *strength_score* (how much the category matches with document content), and *categories*, an array of sub-categories (if any exist).\n\n*phrases* are a list of sentiment-bearing phrases from the document. Each will have a *title, sentiment_score, sentiment_polarity* (negative, positive, or neutral),* is_negated* (whether the phrase has been negated), *negating_phrase* (if one exists),* is_intensified, intensifying_phrase* (if one exists), and *type* (either \"possible\" or \"detected\").\n\nThe Semantria API returns the *themes* of the document. Each has the *title*, main theme (*is_about*), the *normalized* form of the theme, the *stemmed* form of the theme, an *evidence* score, *strength_score* within the document, and *sentiment_polarity*. The API will return *mentions* of the theme: *expandable*, which is the text of the theme mention, *is_negated, negating_phrase*, and* locations*-- the list of coordinates of the mentions found within the document. *offset* is the number of bytes offset in the original text before the start of the mention, and *length* is the length of the mention in bytes.\n\nThe API returns entities with similar parameters to themes. Entities have additional parameters of *type* (either \"named\" or \"user\"),* confident* (whether the confidence queries matched for this entity), and the *entity_type* (Company, Person, Place, etc.). It will also return a list of themes related to this entity.\n\nSemantria API returns relations, which represent a connection between one or more Entities. These have a *type* (named or user value), *relation_type* (such as quotation), *confidence_score,  and extra* of the parent relationship.\n\nThe API will also return a list of opinions extracted from the source text. 
Each will have a *quotation, type* (the type of entity extracted-- named or user value), *speaker, topic, sentiment_score* and *sentiment_polarity*.\n\nFinally, Semantria API gives a list of topics, each with a *title, type, hitcount, strength_score, sentiment_score, sentiment_score* and* topics* (a list of sub-topics, if they exist).\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"API Options\"\n}\n[/block]\n\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Option\",\n    \"h-1\": \"Description\",\n    \"h-2\": \"Default\",\n    \"0-0\": \"auto_response\",\n    \"1-0\": \"is_primary\",\n    \"2-0\": \"chars_threshold\",\n    \"3-0\": \"one_sentence\",\n    \"4-0\": \"process_html\",\n    \"5-0\": \"language\",\n    \"6-0\": \"callback\",\n    \"0-2\": \"False\",\n    \"1-2\": \"False\",\n    \"2-2\": \"80\",\n    \"3-2\": \"False\",\n    \"4-2\": \"False\",\n    \"5-2\": \"English\",\n    \"6-2\": \"Empty\",\n    \"6-1\": \"Defines a callback URL for automatic data responding (more info).\",\n    \"5-1\": \"Defines target language that will be used for task processing.\",\n    \"4-1\": \"Leads the service to clean HTML tags before processing.\",\n    \"3-1\": \"Leads the service to clean HTML tags before processing.\",\n    \"2-1\": \"Defines whether or not the service should respond with processed results on each incoming analytics document or discovery mode request.\",\n    \"1-1\": \"Identifies whether the current configuration is primary or not.\",\n    \"0-1\": \"Defines whether or not the service should respond with processed results on each incoming analytics document or discovery analysis request (more info).\"\n  },\n  \"cols\": 3,\n  \"rows\": 7\n}\n[/block]","excerpt":"","slug":"output","type":"basic","title":"Detailed API Output Explanation"}

# Detailed API Output Explanation


## Detailed Mode Output

Detailed Mode performs analysis on individual documents. The Semantria API lets you customize almost every part of the analysis: from constraining the number of results for each category to defining the parts of speech the server will detect, you can configure Detailed Mode to suit your needs in document sentiment analysis. This section provides a quick reference for customizable options and parameters for POS tagging, as well as a detailed explanation of Detailed Mode's output.

## Line-by-line Term Explanation

This output comes from analyzing the text below. Note that it has been abbreviated in some spots for clarity.

*Recently a toy catalog went out. If you did not get one you are SOL. I just called the corporate and they were mailed at random and put only in certain news papers. You can not get them at the store or ask corporate to send you one. There were some really good coupons like this week 25% off one toy. There was a different coupon each week. I think that this is so wrong. I for one have a target card just for Christmas shopping and would have loved those coupons. And since I have a card I would think a catalog should have been sent to me. Guess who will not be shopping at Target this year!*

```json
{
  # This array shows all the auto categories (Wikipedia categories) found in the text
  "auto_categories": [{
      # These are the categories within the auto category. Note that not all auto categories will have categories
      "categories": [{
          "sentiment_polarity": "neutral",
          "sentiment_score": 0.4875,
          "strength_score": 0.5772358,
          "title": "Bonds",
          "type": "info"
        }, {
          "sentiment_polarity": "neutral",
          "sentiment_score": 0.4875,
          "strength_score": 0.523945,
          "title": "Discount_stores",
          "type": "info"
        }, {
          "sentiment_polarity": "neutral",
          "sentiment_score": 0.4875,
          "strength_score": 0.5088412,
          "title": "Government_bonds",
          "type": "info"
        }, {
          "sentiment_polarity": "neutral",
          "sentiment_score": 0.4875,
          "strength_score": 0.46849313,
          "title": "Private_currencies",
          "type": "info"
        }
      ],
      # This is the sentiment polarity for the auto category
      "sentiment_polarity": "neutral",
      # This is the sentiment score for the auto category
      "sentiment_score": 0.4875,
      # This is the relevance score for the auto category
      "strength_score": 0.8387881,
      # This is the title of the auto category
      "title": "Business",
      # This is the type of auto category - node for a category that can contain other categories (as this example does), leaf for categories at the end of the tree
      "type": "node"
    }
  ],
  # This is the ID of the config used to process the data
  "config_id": "486e1e63-2748-4a38-a457-862d92219fec",
  # This array gives the details of the document. Each element in the array is a sentence. Only a single sentence from the example text is shown here due to length.
  "details": [{
      # Whether the sentence is imperative or not
      "is_imperative": false,
      # Whether the sentence should carry sentiment
      "is_polar": true,
      # This array lists all of the words in the sentence
      "words": [{
          # Was the word negated by a negator
          "is_negated": false,
          # Sentiment score for the word
          "sentiment_score": 0.0,
          # Stemmed form of the word
          "stemmed": "recent",
          # Part of speech tag. See http://dev.lexalytics.com/wiki/pmwiki.php?n=Main.POSTags for all tags
          "tag": "RB",
          # Actual word
          "title": "Recently",
          # Normalized part of speech tag
          "type": "Adjective"
        }, {
          "is_negated": false,
          "sentiment_score": 0.0,
          "stemmed": "a",
          "tag": "DT",
          "title": "a",
          "type": "Determiner"
        }, {
          "is_negated": false,
          "sentiment_score": 0.0,
          "stemmed": "toy",
          "tag": "NN",
          "title": "toy",
          "type": "Noun"
        }, {
          "is_negated": false,
          "sentiment_score": 0.0,
          "stemmed": "catalog",
          "tag": "NN",
          "title": "catalog",
          "type": "Noun"
        }, {
          "is_negated": false,
          "sentiment_score": 0.0,
          "stemmed": "go",
          "tag": "VBD",
          "title": "went",
          "type": "Verb"
        }, {
          "is_negated": false,
          "sentiment_score": 0.0,
          "stemmed": "out",
          "tag": "RP",
          "title": "out",
          "type": "Misc"
        }, {
          "is_negated": false,
          "sentiment_score": 0.0,
          "stemmed": ".",
          "tag": ".",
          "title": ".",
          "type": "Symbol"
        }
      ]
    }, {
      # Other sentences omitted for length
    }
  ],
  # This array lists all entities found
  "entities": [{
      # Did the entity match the optional confidence query
      "confident": true,
      # What type of entity is it
      "entity_type": "Company",
      # How much sentiment evidence is there?
      "evidence": 1,
      # Was this entity a focus of the text?
      "is_about": false,
      # The label of the entity. This can be overridden in user-defined entities.
      "label": "Company",
      # Array of actual mentions of the entity.
      "mentions": [{
          # Was the entity negated?
          "is_negated": false,
          # Actual word found in text
          "label": "Target",
          # Locations info can be used for hit-highlighting.
          "locations": [{
              "length": 6,
              "offset": 582
            }
          ]
        }
      ],
      # Sentiment for the entity in words
      "sentiment_polarity": "neutral",
      # Sentiment for the entity as a float
      "sentiment_score": 0.0,
      # This entity does not have themes, but if it did this is where you would see the themes associated with the entity
      "themes": [{
          # Amount of sentiment evidence for this theme
          "evidence": ###,
          # Is this theme a focus of the text?
          "is_about": (true or false),
          # Array of actual mentions of the theme
          "mentions": [{
              "is_negated": (true or false),
              "label": "",
              "locations": [{
                  "length": ###,
                  "offset": ###
                }
              ],
              # If an object is negated, the negating phrase
              "negating_phrase": ""
            }
          ],
          # Normalized (lower-cased, stemmed) version of the theme
          "normalized": "",
          # Sentiment for the theme in words
          "sentiment_polarity": "",
          # Sentiment for the theme in a float
          "sentiment_score": ###,
          # Stemmed version of the theme
          "stemmed": "",
          # Relevancy of the theme to the entity
          "strength_score": ###,
          # Actual words of the theme
          "title": ""
        },
        # More themes would be found here
      ],
      # Entity name
      "title": "Target",
      # Named entities are automatically discovered, user-entities are defined
      "type": "named"
    }
  ],
  # The relations array lists the relationships found in the text. None were found in this example so the below array is an empty example
  "relations": [{
      # Named relations are auto-discovered
      "type": "",
      # The words triggering the relationship
      "extra": "",
      # The entities involved in the relationship
      "entities": [{
          "title": "",
          "entity_type": ""
        }
      ],
      # Type of relationship
      "relation_type": "",
      "confidence_score": ###
    }
  ],
  # ID of the document
  "id": "5c8-0001",
  # The intentions portion of the results. Note that intentions are only offered in English
  "intentions": [{
      # Phrase showing intent
      "evidence_phrase": "can not get",
      # Intention type
      "type": "quit",
      # The what of the intent
      "what": "them",
      # The who of the intent
      "who": "You"
    }, {
      "evidence_phrase": "did not get",
      "type": "quit",
      "what": "one you",
      "who": "If you"
    }
  ],
  # Language of document
  "language": "English",
  # Confidence in the language
  "language_score": 0.16480173,
  # Any metadata passed to Semantria would be displayed here. This example did not use metadata
  "metadata": {
  },
  # This dictionary lists the model-based sentiment scores. There were no model-based sentiment scores in this example so the below is intentionally empty
  "model_sentiment": {
    # Likelihood the document had a mixed sentiment score
    "mixed_score": ###,
    # Model name. Semantria ships with a default model.
    "model_name": "",
    # Likelihood the document had a negative score
    "negative_score": ###,
    # Likelihood the document had a neutral score
    "neutral_score": ###,
    # Likelihood the document had a positive score
    "positive_score": ###,
    # Most likely sentiment polarity in words
    "sentiment_polarity": ""
  },
  # This array lists all sentiment phrases found in the text.
  "phrases": [{
      # Whether the phrase was intensified
      "is_intensified": false,
      # Whether the phrase was negated
      "is_negated": false,
      # Length of phrase in bytes
      "length": 5,
      # Beginning position of phrase in bytes
      "offset": 449,
      # Phrase sentiment in words
      "sentiment_polarity": "positive",
      # Phrase sentiment in float
      "sentiment_score": 0.565,
      # Actual phrase
      "title": "loved",
      # Whether detected or possible
      "type": "detected"
    }, {
      "sentiment_polarity": "neutral",
      "title": "been sent",
      # Semantria's suggestions of possible sentiment phrases to add to custom configuration
      "type": "possible"
    }, {
      "sentiment_polarity": "neutral",
      "title": "certain news",
      "type": "possible"
    }, {
      "sentiment_polarity": "neutral",
      "title": "different coupon",
      "type": "possible"
    }, {
      "sentiment_polarity": "neutral",
      "title": "not be",
      "type": "possible"
    }, {
      "sentiment_polarity": "neutral",
      "title": "not get",
      "type": "possible"
    }, {
      "sentiment_polarity": "neutral",
      "title": "not get",
      "type": "possible"
    }, {
      "is_intensified": false,
      "is_negated": false,
      "length": 11,
      "offset": 251,
      "sentiment_polarity": "neutral",
      "sentiment_score": 0.41,
      "title": "really good",
      "type": "detected"
    }, {
      "is_intensified": false,
      "is_negated": false,
      "length": 8,
      "offset": 366,
      "sentiment_polarity": "neutral",
      "sentiment_score": -0.4,
      "title": "so wrong",
      "type": "detected"
    }
  ],
  # Sentiment of document in words
  "sentiment_polarity": "neutral",
  # Sentiment of document as float
  "sentiment_score": 0.19166666,
  # Source text
  "source_text": "Recently a toy catalog went out. If you did not get one you are SOL. I just called\n the corporate and they were mailed at random and put only in certain news papers. You can\n not get them at the store or ask corporate to send you one. There were some really good\n coupons like this week 25% off one toy. There was a different coupon each week. I think\n that this is so wrong. I for one have a target card just for Christmas shopping and would\n have loved those coupons. And since I have a card I would think a catalog should have\n been sent to me. Guess who will not be shopping at Target this year!",
  # Semantria status of document
  "status": "PROCESSED",
  # Summary of document
  "summary": "Recently a toy catalog went out... I just called the corporate and they were mailed at random and put only in certain news papers... There were some really good coupons like this week 25% off one toy... ",
  # Array of themes relevant at a document level
  "themes": [{
      # Amount of sentiment evidence for this theme
      "evidence": 7,
      # Was this theme a focus of the text
      "is_about": false,
      # Array of actual mentions of the theme
      "mentions": [{
          # Was the mention negated
          "is_negated": false,
          # Actual mention
          "label": "different coupon",
          # Mention location can be used for hit-highlighting
          "locations": [{
              "length": 16,
              "offset": 316
            }
          ]
        }
      ],
      # Normalized version of theme
      "normalized": "different coupon",
      # Sentiment for theme in words
      "sentiment_polarity": "neutral",
      # Sentiment for theme in a float
      "sentiment_score": 0.41,
      # Stemmed version of the theme
      "stemmed": "different coupon",
      # Relevancy of the theme
      "strength_score": 1.25,
      # Actual words of the theme
      "title": "different coupon"
    }, {
      "evidence": 7,
      "is_about": true,
      "mentions": [{
          "is_negated": false,
          "label": "good coupons",
          "locations": [{
              "length": 13,
              "offset": 258
            }
          ]
        }
      ],
      "normalized": "good coupon",
      "sentiment_polarity": "positive",
      "sentiment_score": 2.58,
      "stemmed": "good coupon",
      "strength_score": 1.5833334,
      "title": "good coupons"
    }, {
      "evidence": 4,
      "is_about": false,
      "mentions": [{
          "is_negated": false,
          "label": "target card",
          "locations": [{
              "length": 11,
              "offset": 393
            }
          ]
        }
      ],
      "normalized": "target card",
      "sentiment_polarity": "positive",
      "sentiment_score": 0.565,
      "stemmed": "target card",
      "strength_score": 0.2,
      "title": "target card"
    }, {
      "evidence": 4,
      "is_about": true,
      "mentions": [{
          "is_negated": false,
          "label": "toy catalog",
          "locations": [{
              "length": 11,
              "offset": 11
            }
          ]
        }
      ],
      "normalized": "toy catalog",
      "sentiment_polarity": "neutral",
      "sentiment_score": 0.205,
      "stemmed": "toy catalog",
      "strength_score": 0.33333334,
      "title": "toy catalog"
    }
  ],
  # This array lists topics discovered in the text. Topics include queries and concept topics (also known as user categories). This example only hit on a query so the below shows the query topic output.
  "topics": [{
      # The number of query terms that hit in the document
      "hitcount": 2,
      # The ID of the query
      "id": "4c640e2e-6e0f-44f9-a6c6-eca67f3962f3",
      # An array listing the term hits
      "mentions": [{
          # Whether the term was negated
          "is_negated": false,
          # The term that hit
          "label": "think",
          # An array of locations of the term
          "locations": [{
              # The length in bytes of the term
              "length": 5,
              # The offset in bytes from the beginning of the document for the hit
              "offset": 346
            }, {
              "length": 5,
              "offset": 502
            }
          ]
        }
      ],
      # The sentiment polarity of the query
      "sentiment_polarity": "neutral",
      # The sentiment score of the query as a float
      "sentiment_score": 0.0,
      # The name of the query
      "title": "Customer Service",
      # The type of topic. This can be concept or query
      "type": "query"
    }
  ]
}
```
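To make the structure above concrete, here is a minimal sketch in plain Python (no Semantria SDK required) that walks a parsed Detailed Mode response and recovers entity mentions from the source text. The file name `detailed_output.json` is just a stand-in for wherever you saved a response. Note that *offset* and *length* are byte counts, so the slicing is done on the UTF-8 encoding of *source_text*, not on the string itself.

```python
import json

# Assume a Detailed Mode response has been saved to disk;
# the file name here is hypothetical.
with open("detailed_output.json") as f:
    doc = json.load(f)

# Top-level document fields from the example above.
print(doc["status"], doc["sentiment_polarity"], doc["sentiment_score"])

# Offsets and lengths are reported in bytes, not characters,
# so slice the UTF-8 bytes of the source text.
source_bytes = doc["source_text"].encode("utf-8")

for entity in doc.get("entities", []):
    print(f'{entity["title"]} ({entity["entity_type"]}): '
          f'{entity["sentiment_polarity"]}')
    for mention in entity["mentions"]:
        for loc in mention["locations"]:
            start, end = loc["offset"], loc["offset"] + loc["length"]
            # For the example above, this should recover "Target"
            # (offset 582, length 6).
            print("  mention:", source_bytes[start:end].decode("utf-8"))
```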
Detailed mode limits apply to both document mode and source mode of analysis. All limits take integer values from 0 to 20. Setting a limit to 0 signals no interest in that output and prevents results for that parameter from appearing in the dataset.

## Detailed Mode output explanation

Semantria returns a wealth of information from its sentiment analysis and data processing, and it can be hard to wade through. Here is a quick reference detailing everything the Semantria API returns in Detailed Analysis Mode.

Each document has an *id*, and each configuration has a unique *config_id*. You can add *tags* and view the *status* of the document ("queued," "processed," or "failed"). The API also returns the *job_id* of the associated job, a *summary* of the document text, the *language* of the source text (along with the *language_score*, the percentage of the best language match among detected languages), and the document-level *sentiment_score* and *sentiment_polarity*.

In the detailed analysis of individual sentences, the API returns Boolean values for *is_imperative* and *is_polar*. Imperative sentences, which represent an action item, have *is_imperative* set to true. *is_polar* is Semantria's guess as to whether the writer of the sentence meant to convey sentiment. For instance, "Good morning all" is a non-polar sentence despite containing the sentiment word "good."

The API returns a list of words grouped by the parent sentence. Each word has a *tag*, a normalized POS *type*, a *title* (the word as written), the *stemmed* form of the word, and a *sentiment_score*.

Semantria also generates *auto_categories*. Each category has a *title*, a *type* ("node" for a category that can contain sub-categories, "leaf" for a category at the end of the tree), a *strength_score* (how strongly the category matches the document content), and *categories*, an array of sub-categories (if any exist).

*phrases* is a list of sentiment-bearing phrases from the document. Each has a *title*, *sentiment_score*, *sentiment_polarity* (negative, positive, or neutral), *is_negated* (whether the phrase has been negated), *negating_phrase* (if one exists), *is_intensified*, *intensifying_phrase* (if one exists), and *type* (either "possible" or "detected").
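Because detected and possible phrases carry different fields (only detected phrases reliably include a score and byte location in the example above), it can help to handle them separately when consuming the output. A small illustrative sketch; the helper name `summarize_phrases` is hypothetical:

```python
def summarize_phrases(doc):
    """Split sentiment phrases into detected hits and 'possible'
    suggestions worth reviewing for a custom configuration."""
    phrases = doc.get("phrases", [])
    detected = [p for p in phrases if p["type"] == "detected"]
    possible = [p for p in phrases if p["type"] == "possible"]

    for p in detected:
        # Detected phrases carry a float score; is_negated reports
        # whether a negator flipped the phrase.
        flag = " (negated)" if p.get("is_negated") else ""
        print(f'{p["title"]}: {p["sentiment_score"]:+.3f}{flag}')

    # Possible phrases are suggestions only; report their titles.
    print("candidates:", sorted({p["title"] for p in possible}))
```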
The Semantria API returns the *themes* of the document. Each has a *title*, an *is_about* flag (whether it is a main theme), the *normalized* form of the theme, the *stemmed* form of the theme, an *evidence* score, a *strength_score* within the document, and a *sentiment_polarity*. The API also returns *mentions* of the theme, each with a *label* (the text of the theme mention), *is_negated*, *negating_phrase*, and *locations*, the list of coordinates of the mentions found within the document. *offset* is the number of bytes into the original text at which the mention starts, and *length* is the length of the mention in bytes.

The API returns entities with parameters similar to themes. Entities have the additional parameters *type* (either "named" or "user"), *confident* (whether the confidence queries matched for this entity), and *entity_type* (Company, Person, Place, etc.). The API also returns a list of themes related to each entity.

Semantria returns relations, which represent a connection between one or more entities. These have a *type* (named or user value), a *relation_type* (such as quotation), a *confidence_score*, and *extra* (the words that triggered the relationship).

The API also returns a list of opinions extracted from the source text. Each has a *quotation*, a *type* (the type of entity extracted: named or user value), a *speaker*, a *topic*, a *sentiment_score*, and a *sentiment_polarity*.

Finally, the Semantria API gives a list of topics, each with a *title*, *type*, *hitcount*, *strength_score*, *sentiment_score*, *sentiment_polarity*, and *topics* (a list of sub-topics, if they exist).

## API Options

| Option | Description | Default |
| --- | --- | --- |
| auto_response | Defines whether or not the service should respond with processed results on each incoming analytics document or discovery analysis request (more info). | False |
| is_primary | Identifies whether the current configuration is primary or not. | False |
| chars_threshold | Defines the document length, in characters, below which the document is processed in one-sentence mode. | 80 |
| one_sentence | Leads the service to process each document in one-sentence mode. | False |
| process_html | Leads the service to clean HTML tags before processing. | False |
| language | Defines the target language that will be used for task processing. | English |
| callback | Defines a callback URL for automatic data responding (more info). | Empty |
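As a closing sketch, the options above could be combined into a configuration payload like the following. This is illustrative only: the configuration name and callback URL are placeholders, and how the payload is submitted (an official SDK's configuration-update call or a signed REST request) depends on your integration.

```python
# A hypothetical configuration payload using the options documented above.
config = {
    "name": "catalog-feedback",  # placeholder configuration name
    "is_primary": False,         # leave the account's primary config alone
    "auto_response": False,      # poll for results rather than auto-respond
    "language": "English",       # target language for task processing
    "one_sentence": False,       # do not force one-sentence mode
    "process_html": True,        # strip HTML tags before processing
    "chars_threshold": 80,       # default character threshold
    "callback": "https://example.com/semantria/callback",  # placeholder URL
}
```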