Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment & Arguments
The amendment filed 7/03/2025 has been entered and considered in this Office action. Applicant amended claims 1-2, 5-6, and 9-20.
Applicant's arguments concerning the rejection of the claims under 35 U.S.C. §101 have been fully considered but are not persuasive. Applicant asserts (Remarks, end of page 9 through beginning of page 10) that the claims recite several steps and that "… and assessment-based user interface modification do not exist outside of the context of a computing environment." Examiner notes that each step of the claims can be performed by a human, who can analyze the information after drawing it on a sheet of paper, and that the recited computing components are merely generic computer components. The claims do not recite any additional element that would confer patent eligibility. Applicant also relies on Example 37 (Remarks, page 10, last paragraph). Examiner respectfully disagrees with Applicant's argument regarding Example 37. The instant claims differ from the GUI claim of Example 37: the GUI in Example 37 concerns rearranging icons of the GUI, which is entirely different from the instant claims, which recite a menu toolbar having a contradiction tab. For the instant application, a human can still analyze the text toolbar using the human mind and human analysis: a human can draw a toolbar with different tabs on paper using a pencil and show the different tabs, one of which shows data/information discrepancies relative to another tab. Moreover, displaying the tab is post-solution activity. The claim as a whole is directed to an abstract idea.
Applicant next asserts that the claims must be considered "as a whole" (Remarks, page 11). Examiner disagrees with Applicant's assertion. Examiner has considered the claims as a whole, but the claimed concept remains one that a human can perform by analysis; the claims recite a human mental process together with generic elements such as a "processor," "memory," and "display." A human can draw a menu toolbar on paper using a pen and name the tabs: one tab contains a transcript, and another tab, labeled anomaly detection, contains anomaly information. There is no additional element that renders the claims patent eligible. Therefore, the claims recite an abstract idea.
Applicant's arguments concerning the rejection of the claims under 35 U.S.C. §103 have been fully considered but are not persuasive. Examiner disagrees with Applicant's assertions (Remarks, pages 12-14). Examiner has mapped the newly amended claims using the same §103 combination of three references, Fox, KLOETZER, and Mann; please see the claim mapping below. This combination teaches the amended claims. Mann teaches the newly amended limitations: Fig. 44 shows transcription with a GUI, and Figs. 68-73 show an interface for faulty automations, with the corresponding paragraphs discussing how and when irregularities occur, how the user is notified, and how they are resolved using an automation tool. KLOETZER teaches contradictory answers in paragraph [0033] ("detecting mutually contradicting expressions as answers to the question sentence from documents …"). KLOETZER's Fig. 3 also teaches a contradiction pattern pair classifying unit 80 (paragraph [0038]), including an SVM (Support Vector Machine) 104 functioning as a classifier that gives a score for each contradiction pair. Examiner relies on the third reference, Mann, which displays a GUI in Fig. 44 and teaches transcript analysis and an automation tool for the user. Please see Examiner's detailed mapping below.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claims 1, 11, and 16, the limitations of "receiving," "ingesting," "generating," "contextualizing," "causing display," "executing," "receiving," "determining," and "causing display," as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components. More specifically, receiving a transcription file that contains questions and answers is mere data gathering. A human mind can write a transcription file/text document using pen and paper; group the sentences/queries/words based on user information such as name, place, and other information about the documents; draw/create a table/metric set on a piece of paper based on user metadata; write similar words/sentences close together on the page; and, looking at the table containing the sentences and words, decide whether the queries/sentences/words are similar or have the same meaning, and how inconsistent the similar queries/sentences/words are. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. The claims recite not only a mental process but also a mathematical algorithm, namely "vector space modeling." Further, a human can draw tabs/panes in a notebook, give each page a heading indicating which categorical text it contains, and name them the first and second tabs. Some categorical text/keywords may be information or data discrepancies, and a human can analyze them as stated in the amended claims. Displaying text on a user interface is post-solution activity. The claims involve a mental process and a mathematical algorithm, both of which are abstract ideas. Accordingly, the claims recite an abstract idea.
This judicial exception is not integrated into a practical application because the recitation of a "system," "memory," "computer-readable storage medium," "processor," "display," and "user interface" in claims 1, 11, and 16 reads on generalized computer components, based upon the claim interpretation wherein the structure is interpreted using paragraphs [0019], [0024], [0035], and [0037] of the specification. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of using generalized computer components to perform the "receiving," "ingesting," "generating," "contextualizing," and "executing" steps and to structure and display data amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.
With respect to claim 2, the claim recites "storing" a transcription data file in a memory. This reads on a generalized computer component per paragraph [0035] of the specification.
With respect to claim 3, the claim recites a "contradiction detection assessment," which reads on a human writing down a transcription on a piece of paper, continuously evaluating the written text for contradictions, and correcting/updating the written text as new text is received. No additional limitations are present.
With respect to claim 4, the claim recites "receiving a plurality of transcripts," which reads on a human receiving multiple text documents containing a transcription file, which is mere data gathering. No additional limitations are present.
With respect to claims 5, 12, and 17, the claims recite "generating a response to a query," which reads on human activity using pen and paper: a human writes down the transcription file on a piece of paper using pen and pencil, and then searches the paper for a response to the query, locating a similar query that exists on the paper. No additional limitations are present.
With respect to claims 6, 14, and 19, the claims recite "clustering," which reads on human activity using pen and paper: a human writes down the transcription file on a piece of paper using pen and pencil, and groups/clusters the text based on similarity and common theme. Clustering/grouping text documents is a well-known concept in the art.
With respect to claims 7, 15, and 20, the claims recite "clustering," which reads on human activity using pen and paper: a human writes down the transcription file on a piece of paper using pen and pencil, and groups/clusters the text based on similarity and common theme. Also, clusters can be further broken down into subclusters that represent subthemes of a parent theme. No additional limitations are present.
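For illustration only (this sketch is not part of the claims or the record): grouping texts by similarity and common theme, and then splitting groups into subgroups, can be expressed as a short program. The Jaccard word-overlap measure, the thresholds, and the example texts below are hypothetical.

```python
# Illustrative sketch: grouping short texts by shared-vocabulary (Jaccard)
# similarity, then re-clustering each group with a stricter threshold --
# a toy analogue of clusters by theme and subclusters by subtheme.

def jaccard(a, b):
    """Jaccard similarity between the word sets of two texts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def cluster(texts, threshold):
    """Greedy single-pass clustering: join a text to the first cluster
    whose representative (first member) is similar enough."""
    clusters = []
    for t in texts:
        for c in clusters:
            if jaccard(t, c[0]) >= threshold:
                c.append(t)
                break
        else:
            clusters.append([t])
    return clusters

texts = [
    "the witness saw the red car",
    "the witness saw the blue car",
    "the contract was signed in May",
]
groups = cluster(texts, threshold=0.5)                   # themes
subgroups = [cluster(g, threshold=0.8) for g in groups]  # subthemes
```

With these hypothetical thresholds, the two "witness" sentences form one theme and the contract sentence another; the stricter pass then splits the witness theme into subthemes.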
With respect to claim 8, the claim recites "a dimension of the vector space modeling is 1,000," which reads on a mathematical algorithm or mathematical representation. No additional limitations are present.
With respect to claim 9, the claim recites "generating, by the computing device, one or more suggested questions," which reads on a human making a recommendation or suggestion when a contradiction appears. No additional limitations are present.
With respect to claims 10, 13, and 18, the claims recite "storing" a transcription data file in a memory. This reads on a human who, after receiving text from a user, writes it on paper, breaks down the transcript, categorizes the concepts of the text/transcription/questions/messages, and keeps it for later use. Storing is post-solution activity and is performed using a generalized computer component per paragraph [0035] of the specification.
These dependent claims do not remedy the failure of the judicial exception to be integrated into a practical application, and they further fail to include additional elements that are sufficient to amount to significantly more than the judicial exception.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 3, 4, 10, 11, 13, 16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Fox et al. (US 2021/0174016 A1) in view of KLOETZER (US 2016/0260026 A1), and further in view of Mann et al. (US 2021/0342785 A1).
Regarding Claims 1, 11, and 16, Fox teaches:
1. A method for analyzing, by a computing device, one or more transcripts, the method comprising: receiving, by the computing device, a first transcript comprising at least one or more questions and one or more answers to the one or more questions; Fox teaches (“[0055] Question-answer documents, once received, can be parsed so the text can be readily processed. …”) (“[0079] The question-answer document can include text representing a question-answer document, which can include a series of questions and answers. For example, the question-answer document can represent a transcript of a deposition. …”) (“[0121] In step 205, a question-answer document can be received for processing. The question-answer document can contain one or more documents that include a series of questions and answers. The question-answer document can include text representing a question-answer document, which can include a series of questions and answers. The question-answer document can be in a file (e.g., .pdf, .docx, .rtf, .txt, .ocr, .csv), data structure (e.g., JSON, XML, tabular), or database (e.g., set of tables, object store), and other suitable formats as can be appreciated. In some examples, the question-answer document can be received from a user. In other examples, the question-answer document can be retrieved at runtime from the storage based on a document identifier given by a user.”) by Fox et al. US 20210174016 A1
ingesting, by the computing device, the first transcript into a first transcript data file, Fox teaches (“[0079] … The question-answer document can be in a file (e.g., .pdf, .docx, .rtf, .txt, .ocr, .csv), data structure (e.g., JSON, XML, tabular), or database (e.g., set of tables, object store), and other suitable formats as can be appreciated. In some examples, the question-answer document can be received from a user. In other examples, the question-answer document can be retrieved at runtime from the storage based on a document identifier given by a user.”) (“[0119] … In some examples, a question-answer document can be generated by the client application 180 based on input from one or more input devices (not shown) communicably coupled to the client device 110. The client application 180 can access one or more question-answer documents from the client data store 185 and transfer the one or more question-answer documents to the computing environment 105 for processing.”) by Fox et al. US 20210174016 A1
the transcript data file based at least in part on the first transcript, the computing device using natural language processing to extract transcript data from the first transcript; Fox teaches (“[0032] FIG. 28 shows an example of a question, represented as a parse tree, with each word tagged by its part-of-speech, and the transformed version that shows a chunk produced through natural language processing, according to various embodiments of the present disclosure.”) (“[0079] The question-answer document can include text representing a question-answer document, which can include a series of questions and answers. For example, the question-answer document can represent a transcript of a deposition. …”) (“[0109] In some examples, the transforming application 145 can transform question-answer groups using techniques in natural language processing (NLP). …”) (“[0110] For each such common pattern, the transforming application 145 can use NLP parsing techniques like chunking and chinking to create custom transformation rules to transform the text into a canonical form. Information from text can be extracted using chunking and chinking. These techniques can use regular expressions based on the part-of-speech (POS) tags, to create a parse tree from a given sentence. Chunking can refer to the process of extracting chunks from a sentence based on certain POS tag rules.”) (“[0243] FIG. 28 shows an example of a question, represented as a parse tree, with each word tagged by its part-of-speech, and the transformed version that shows a chunk produced through natural language processing, including chunking and chinking. Processing can begin with the question text and created a simple sentence tree 2805. Then sentence tree can be broken up into a chunked form 2810, with a chunk based on a rule of “<. *>?<PRP><.*>?.” This rule specifies that any personal pronoun that has any POS tag before and after it can be extracted as a chunk. 
In this case, it extracted “Were” and “able” that were before and after the pronoun word.”) by Fox et al. US 20210174016 A1
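For illustration only (this sketch is not from the Fox reference, which uses full parse trees and POS-regex chunking rules): the chunking idea quoted above, extracting the words around a personal pronoun (PRP) from a POS-tagged question, can be sketched as follows. The helper name and the pre-tagged sentence are hypothetical.

```python
# Toy sketch of POS-based "chunking" in the spirit of Fox [0110]/[0243]:
# from a pre-tagged sentence, extract the chunk consisting of the word
# before, at, and after the first personal pronoun (PRP) tag.

def chunk_around_prp(tagged):
    """Return the words surrounding the first PRP-tagged word, if any."""
    for i, (word, tag) in enumerate(tagged):
        if tag == "PRP":
            lo = max(i - 1, 0)
            hi = min(i + 2, len(tagged))
            return [w for w, _ in tagged[lo:hi]]
    return []

# Hypothetical tagging of the question discussed in Fox [0243]; a real
# system would obtain the (word, POS) pairs from an NLP toolkit's tagger.
tagged_question = [("Were", "VBD"), ("you", "PRP"), ("able", "JJ"),
                   ("to", "TO"), ("finish", "VB")]
chunk = chunk_around_prp(tagged_question)  # ["Were", "you", "able"]
```

As in the quoted passage, the chunk captures "Were" and "able" on either side of the pronoun.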
contextualizing, by the computing device, the first transcript using vector space modeling; and Fox teaches (“[0272] Another evaluation metric that can be used is sentence similarity. Sentence similarity can help to determine if sentences are semantically equivalent. A pair of sentences was converted into vector representations in the form of embeddings, and then the cosine similarity measure, with the two sentence vectors, was used to estimate the similarity between them. Suitable embeddings can be any of the conventional embeddings generated, like BERT or word2Vec. Thus, InferSent, a sentence embedding method providing vector representations of English sentences, was used.”) by Fox et al. US 20210174016 A1
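As a hypothetical sketch only (not Fox's implementation): the sentence-similarity measure described in the quoted paragraph [0272], the cosine similarity between two sentence embedding vectors, can be computed as follows. The toy three-dimensional vectors stand in for real embeddings such as BERT, word2vec, or InferSent outputs.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical toy "sentence embeddings"; a real system would obtain
# these from an embedding model rather than hard-coding them.
emb_1 = [0.2, 0.7, 0.1]
emb_2 = [0.25, 0.65, 0.05]
score = cosine_similarity(emb_1, emb_2)  # near 1.0 for similar sentences
```

A score near 1.0 indicates semantically similar sentences; orthogonal embeddings score near 0.0.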
Fox does not expressly teach executing, by the computing device, a contradiction detection assessment, based at least in part on the one or more questions and the one or more answers to the one or more questions, using inference modeling and anomalies detection to determine a contradiction score.
KLOETZER teaches:
executing, by the computing device, a contradiction detection assessment, based at least in part on the one or more questions, the one or more answers to the one or more questions, and the vector space modeling, to determine a contradiction score. KLOETZER teaches (“[0033] … a contradictory expression presenting system 44, receiving an input of a question sentence from PC 34, detecting mutually contradicting expressions as answers to the question sentence from documents on the Web by using mutually contradicting expressions stored in contradiction pattern pair storage device 42 …”) (“[0038] FIG. 3 shows a schematic configuration of first-stage contradiction pattern pair classifying unit 80, which includes: an opposite polarity pair extracting unit 100 extracting opposite polarity pairs from candidate pattern pairs stored in candidate pattern pair storage device 60 with reference to polarity dictionary storage device 62, and storing the extracted pairs in opposite polarity pair storage device 102; and an SVM (Support Vector Machine) 104 functioning as a classifier for classifying the opposite polarity pairs stored in opposite polarity pair storage device 102 to pattern pairs considered to be mutually contradictory and pattern pairs considered to be not necessarily contradictory, and storing the former pairs in contradiction pattern pair intermediate storage device 82 and the latter pairs in non-contradiction pattern pair intermediate storage device 84. At the time of classifying the pattern pairs, SVM 104 adds, to each pattern pair, a score representing a degree of adequacy of the pattern pair to be classified as a contradiction pattern pair.”) (“[0041] FIG.
4 … a training data expanding unit 136, establishing score CDP for each additional contradiction pattern by using the subscore CDPsub of the additional contradiction pattern pairs stored in additional contradiction pattern pair storage device 132, merging a prescribed ratio of contradiction pattern pairs having higher scores CDP with the training data stored in training data storage device 108 (see FIG. 3) and thereby expanding the training data; and an expanded training data storage device 138 storing the training data output from training data expanding unit 136. …”) (“[0062] In accordance with the result of learning, SVM 104 classifies each of the candidate pattern pairs having mutually opposite polarities stored in opposite polarity pair storage device … … Here, SVM 104 gives SVM score to each of the output pattern pairs. If it is highly possible that a pattern pair is a contradiction pattern pair, the score will be high, and otherwise, the score will be low. …”) (“[0095] The program includes a sequence of instructions consisting of a plurality of instructions causing computer 540 to function as various functional units of contradiction pattern pair collecting device 40 in accordance with the embodiment above. Some of the basic functions necessary to cause computer 540 to operate in this manner may be statically linked at the time of creating the program or dynamically linked at the time of executing the program, by the operating system running on computer 540, by a third-party program, or various programming tool kits or program library (for example, a computer program library for SVM) installed in computer 540. …”) (“[0100] By way of example, an SVM is used as a classifier. …”) by KLOETZER; US 20160260026 A1
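For illustration only, and not KLOETZER's actual classifier: the role of SVM 104, which assigns each candidate pattern pair a score and treats higher-scoring pairs as more likely contradiction pattern pairs (KLOETZER [0038], [0062]), can be sketched with a toy linear decision function. The weights, bias, and feature vectors below are hypothetical.

```python
# Toy sketch of a linear classifier's decision score, analogous in role to
# KLOETZER's SVM 104: a higher score means the candidate pattern pair is
# more likely a contradiction pattern pair.

def decision_score(features, weights, bias):
    """Signed score of a linear classifier for one feature vector."""
    return sum(w * x for w, x in zip(weights, features)) + bias

WEIGHTS = [1.5, -0.8, 2.0]   # hypothetical learned weights
BIAS = -0.5                  # hypothetical learned bias

# Hypothetical features for two candidate pattern pairs, e.g.
# (polarity opposition, template overlap, negation cue).
pair_a = [1.0, 0.2, 0.9]
pair_b = [0.1, 0.9, 0.0]

score_a = decision_score(pair_a, WEIGHTS, BIAS)
score_b = decision_score(pair_b, WEIGHTS, BIAS)
is_contradiction_a = score_a > 0.0   # classified as contradictory
is_contradiction_b = score_b > 0.0   # classified as not contradictory
```

Pairs scoring above the decision boundary would go to the contradiction-pair store, and the rest to the non-contradiction store, mirroring the two-way routing described in paragraph [0038].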
KLOETZER is considered to be analogous to the claimed invention because it relates to a device for extracting contradictory expressions from a huge amount of texts and, more specifically, to a device for extracting, with high reliability, pairs of mutually contradicting expressions from a huge amount of texts.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Fox to incorporate the teachings of KLOETZER in order to detect mutually contradicting expressions among the answers to a question sentence.
One would have been motivated to do so because system performance is improved by obtaining contradiction pattern pairs. (“[0069] Accuracy of contradiction pattern pairs in contradiction pattern pair storage device 42 obtained in this manner was confirmed by experiments as will be described in the following, and it was confirmed that the performance was clearly improved over the prior art.”) by KLOETZER; US 20160260026 A1
The combination of Fox and KLOETZER does not explicitly teach generating metrics related to metadata.
Mann teaches generating, by the computing device, one or more sets of metrics corresponding to the first transcript, the one or more sets of metrics based at least in part on global metadata of the first transcript data file; Fig. 44 (Transcript column): Mann teaches “Meeting Transcript” to populate cells of “Transcript” column 4312, and a conversation transcript may include an audio recording or video recording of the meeting, transcription of the audio, or the chat entries during the communication. (“[0526] … FIG. 43 illustrates interface 4300 with menu 4302 enabling a user to select various prompts to associate the communications rule with a cell (or multiple cells in a column or row) and trigger the generation of new or modified table entries characterizing workflow-related communications between workflow participants. Specifically, menu 4302 may enable a user to configure specific fields from a video communications platform (such as Zoom) for populating cells or columns of the user's board. In FIG. 43, the user has selected “Meeting Name” to populate cells of a “Name” column 4304 of the user's board (e.g., a board as illustrated in FIG. 44), “Meeting Host” to populate cells of “Host” column 4306, “Meeting Participants” to populate cells of “Participants” column 4308, “Meeting Duration” to populate cells of “Duration” column 4310, and “Meeting Transcript” to populate cells of “Transcript” column 4312. FIG. 43 also illustrated an exemplary interface for the user to select “Meeting Agenda” from a pick list to populate cells of a “Status” column 4314 of the user's board (e.g., a board as exemplified in FIG.
44).”) (“[0381] In some embodiments, an automation may include “When the status changes to ‘Done,’ do ‘something’ on ‘Date.’” For example, the system may send an email or text on the date that the status changes.”) (“[0539] … Aspects of this disclosure may involve characteristics of a communication further including at least one participant identification, start and end time stamps, a conversation transcript, a conversation duration, a list of key words spoken in the communication, or statistics related to participants. Participant identification may include any identifying information of people (such as name, image, or email address). Start and end time stamps may include start and end time indicators of a meeting (e.g., graphical or numerical or a combination thereof) or timestamps associated with someone joining meeting and leaving meeting. A conversation transcript may include an audio recording or video recording of the meeting, transcription of the audio, or the chat entries during the communication. …”) (“[0650] Automations may include predefined automation categories (e.g., static recipes) and user-defined categories (e.g., custom recipes). Predefined automation categories may be set in advance and may include, for example, commonly used categories of automations or categories expected to be highly used. Predefined automation categories may include status change, notification, recurring, item creation, and due date automations. Predefined automation categories may include certain blocks arranged in a predefined order. For example, a predefined notification automation may include a recipe of “when a column changes, notify someone.” Blocks making up such an automation may include “column” and “notify.” While a user may be able to customize the automation by, for example, modifying which column changes, and who to notify, the basic structure of the automation may be unchangeable in some embodiments.
In such automation, the blocks may remain in set positions. A user-defined automation, on the other hand, may allow a user to build an automation from scratch using their own selection of blocks. User-defined automations may grant a user broad flexibility to configure automations. A user may have the ability to build an automation that performs a broad range of desired functions, using a variety of applications including external platforms, in a user-friendly intuitive interface that does not require programming knowledge.”) (“0711] … Likewise, filters such as board filter 6906 and automation filter 6908 enabled filtering by boards 6914 and automations 6916, respectively allowing a troubleshooter to fine tune the account automation activity as needed. The tool, for example, may provide administrative level information of the failures such as the date of the generation of the rule and any configuration edits to the automations. Further, the system may be configured to automatically or manually disable specific automations, in some instances, in response to the detection of a failure.”) (“[0554] Block 464: Generate an object associated with the table, the object containing the characteristics of the communication logged in memory. In some embodiments, the system may display the stored metadata associated with the video call and its participants on a table of the data management platform.”) by Mann et al. US 20210342785 A1
, the transcript data comprising the one or more sets of metrics; Mann teaches generating a new row in a table: “When any meeting ends, create an item storing participant identification, start and end time stamps, conversation transcript, and conversation duration” (i.e., one or more sets of metrics). (“[0524] … For example, when a communication session is scheduled or when a communication session ends, the data management platform's system may generate a new row in a table, memorializing the communication session and displaying any metadata associated with and/or stored from the communication session. …”) (“[0536] By way of one example, FIG. 45 … “When any meeting ends, create an item storing participant identification, start and end time stamps, conversation transcript, and conversation duration”; and communications rule 4512 recites, “When any meeting ends, create an item storing a list of key words spoken in the communication.” Using each of the communications rules displayed in FIG. 45, the system may pull all data (metadata or characteristics of the communication), log the data memory, and generate an object associated with the table to display the collected data from the communication. …”) by Mann et al. US 20210342785 A1
causing display, via a computing device user interface, on a first tab of the user interface, the first transcript; Fig. 44 (Transcript column): Mann teaches (“[0526] … FIG. 43, the user has selected “Meeting Name” to populate cells of a “Name” column 4304 of the user's board (e.g., a board as illustrated in FIG. 44), “Meeting Host” to populate cells of “Host” column 4306, “Meeting Participants” to populate cells of “Participants” column 4308, “Meeting Duration” to populate cells of “Duration” column 4310, and “Meeting Transcript” to populate cells of “Transcript” column 4312. FIG. 43 also illustrated an exemplary interface for the user to select “Meeting Agenda” from a pick list to populate cells of a “Status” column 4314 of the user's board (e.g., a board as exemplified in FIG. 44).”) (“[0529] Aspects of this disclosure may include presenting on a display at least one active link for enabling workflow participants to join in a video or an audio communication. … … Presenting at least one active link on a display may include presenting the link as a graphic (e.g., an icon that may be static or animated), as text (e.g., a URL), or any combination thereof. … for communication between people in real time (e.g., a phone call via Zoom, Teams, or WebEx). A video communication may include any transmission of data using technology for the reception and transmission of audio-video signals by users in different locations, for communication between people in real time (e.g., a video call via Zoom, Teams, or WebEx).”) (“[0705] After the query is received some disclosed embodiments may access the activity log to identify at least one most recent action performed on the table and present at least one specific logical sentence structure underlying at least one logical rule that caused the at least one most recent action.
… … Should the automation fail (e.g., because there is no email address to send the message or any other error that may occur), the last action recorded may be an indication of a failure to send the email. This result and recorded last action may be presented on a graphical user interface or any other way preferred by the user. The presentation may include causing the at least one logical sentence structure to appear on a display, such as on a screen, client device, projector, or any other device that may present the at least one logical sentence structure, as previously disclosed.”) by Mann et al. US 20210342785 A1
receiving, via the computing device user interface, a user selection to generate a contradiction report based on the contradiction detection assessment; FIG. 69: Mann teaches different tabs, descriptions and time stamps in the activity log, and error handling. (“[0646] … graphical user interface 5420 may display temporary text such as “when this happens” in a region of condition 5402 and “do something” may be displayed in a region of action 5422. The temporary text may guide a user to build the automation without needing programming knowledge.”) (“[0708] In FIG. 69 for example, an automation under the automation heading 6916 may receive updates to change the variables, including the conditions (e.g., “When Date arrives” and”) (“[0709] “When status changes to done”) and actions (e.g., “send an email to Ann Smith” and “notify Joe”). …”) (“[0710] … For example, a visual cue may include a pop-up message, a presentation of a graphical symbol that indicates a warning, an animation such as a flashing indication, or any other indicator displayed on a client device. In such an event, the system may proceed with identifying a particular logical sentence structure likely to be associated with the irregularity and displaying the particular logical sentence structure. The particular logical sentence structure may refer to a particular automation that contains an irregularity. For example, in the event of timeout occurring while sending an email, the particular logical sentence structure causing the timeout may be in communication with an email server but might not be able to fully transmit the email due to an error such as an incorrect email address, the lack of an email address, or any other irregularity. Because of this irregularity, the system may display this particular automation consistent with the earlier disclosure.
The display of the particular logical sentence structure may also include a display of a variable recently changed in the particular logical sentence structure consistent with earlier disclosure. …”) by Mann et al. US 20210342785 A1
determining, by the computing device, a contradiction in the one or more questions and one or more answers, the contradiction comprising a first answer of the one or more answers contradicting a second answer of the one or more answers; and FIGS. 69-72: Mann teaches different tabs, descriptions and time stamps in the activity log, and error handling. (“[0708] In FIG. 69 for example, an automation under the automation heading 6916 may receive updates to change the variables, including the conditions (e.g., “When Date arrives” and”) (“[0709] “When status changes to done”) and actions (e.g., “send an email to Ann Smith” and “notify Joe”). Each of the changes indicated by change entries 6918, 6920, and 6922 may include a time stamp under Date and Time heading 6910 to reflect when the change was made. Each of the change entries may reflect the changes that were made under the automation heading 6916 so that a user may follow each of the updates made to the automation in sequential order and determine which change may have caused an error in the normal operation of the automation. While FIG. 69 illustrates a filter for all changes made, a user may also filter the changes based on a date and time using the Date and Time filter 6902 to view the changes made in the last few minutes, last hour, last day, last week, last month, last year, or any other time period.”) (“[0710] … For example, in the event of timeout occurring while sending an email, the particular logical sentence structure causing the timeout may be in communication with an email server but might not be able to fully transmit the email due to an error such as an incorrect email address, the lack of an email address, or any other irregularity. Because of this irregularity, the system may display this particular automation consistent with the earlier disclosure.
The display of the particular logical sentence structure may also include a display of a variable recently changed in the particular logical sentence structure consistent with earlier disclosure. For example, a user or entity may modify an automation to send an email to a new email address. In response to this modification, the system may display the new email address as the variable recently changed, so that the user may identify a recent change that may have caused an irregularity.”) by Mann et al. US 20210342785 A1
causing display on a second tab of the computing device user interface the contradiction report for the first transcript, the second tab being accessible based on the presence of the contradiction, the contradiction report comprising the first answer, the second answer, and the contradiction score. Mann teaches different tabs, descriptions and time stamps of the activity log, error handling, and that the system may rate or score the severity of the failures. (“[0711] Similarly, as described earlier with relation to block 6812 in FIG. 68, presenting may include at least one logical sentence structure to appear on a display as also shown on the exemplary FIG. 69 through FIG. 72. FIG. 69 illustrates an exemplary representation of a collapsed account activity viewing interface 6900 of a system for troubleshooting faulty automations in tablature. … … For example, if a user troubleshooting the automation only would like to check on failed activities, the user may utilize view 6900 to view by a “failed” status and reconfigure those particular automations. As depicted, “Success” status 6922 corresponds to a configured automation that did not encounter any issues and performed as expected; “Pending” status 6918 corresponds to an automation currently processing that may be monitored by the user in a real-time; “Failed” Status 6920 corresponds to an automation that did not perform as expected and may display a reason for failure as depicted, and a button (or any other interactive element) 6924 to assist in resolving the issue. The “Failed” status may be an example of an indication of an irregularity. … … The system may also include mapping of different reasons for failures associated with automations and integrations.
In some instances, the system may rate or score the severity of the failures, which may be included in a notification to a user or administrator to communicate the failure and/or to provide information needed to correct the failure.” )(“[0005] … Such a tool may manage various automation tasks, occurring irregularities, and other aspects of an automation.” (“[0006] It may be helpful to provide a user with information regarding one or more automations associated with one or more boards. Then, when an irregularity in an automation occurs in a board, one or more of the most recently changed automations may be displayed so that a user can quickly identify the source of the problem. Such information may include for example, an overview on how long tasks will take to complete, warnings, historical information, and the like. Further, the troubleshooting tool may include display features that provide different informational displays that allow a user to interact with the information in real time in an organized manner.”) (“[0701] … activating a graphical user interface (GUI) …”) by Mann et al. US 20210342785 A1
Mann is considered to be analogous to the claimed invention because it relates generally to systems, methods, and computer-readable media for enabling and optimizing workflows in collaborative work systems.
Therefore, it would have been obvious for someone of ordinary skill in the art before the effective filing date of the claimed invention to modify Fox and KLOETZER to incorporate the teachings of Mann in order to detect voice recognition over a given period.
One could have been motivated to do so because the software system would be more efficient. (“[0004] ... It would be useful to improve these software applications to increase operation management efficiency and overall efficiency of computer systems.”) by Mann et al. US 20210342785 A1
Claim 11 is a system claim with a limitation similar to the limitation of method Claim 1 and is rejected under similar rationale.
Claim 16 is a non-transitory computer-readable claim with a limitation similar to the limitation of method Claim 1 and is rejected under similar rationale. Additionally,
Fox teaches:
16. A non-transitory computer-readable storage medium having computer-executable instructions stored thereon for analyzing one or more transcripts, wherein executing the computer-executable instructions on a computing device causes the computing device to: Fox teaches (“[0068] Among embodiments, some aspects of the present disclosure are implemented by a computer program executed by one or more processors, as described and illustrated. As would be apparent to one having ordinary skill in the art, one or more embodiments may be implemented, at least in part, by computer-readable instructions in various forms, and the present disclosure is not intended to be limiting to a particular set or sequence of instructions executed by the processor.”) (“claim 8. A system for transforming question-answer groups into declarative segments, comprising: a memory device to store computer-readable instructions thereon; and at least one computing device configured through execution of the computer-readable instructions to: …”)
Regarding Claim 2, the combination teaches claim 1 as identified above.
Fox teaches:
2. The method of claim 1, further comprising storing, by the computing device, a first transcript data file in a transcript database. Fox teaches (“[0079] The question-answer document can include text representing a question-answer document, which can include a series of questions and answers. For example, the question-answer document can represent a transcript of a deposition. While the term “question-answer document” is used here to describe the data processed by the parsing application 130, the data being processed can be in a file (e.g., CSV), data structure (e.g., JSON or tabular), or database (e.g., set of tables, object store), so these terms are used interchangeably in the present disclosure, as can be appreciated.”) (“[0156] In step 930, the anonymizing application 135 can generate a mapping between the original content of detected entity type and its anonymized representation as a key-value pair. The mapping can be stored in database data 155.”) by Fox et al. US 20210174016 A1
Regarding Claim 3, the combination teaches claim 1 as identified above.
Fox further teaches:
3. The method of claim 1, wherein the contradiction detection assessment comprises a dynamic learning model. Fox teaches (“[0043] … The decoder is a canonical RNN-decoder, but with distinct differences in prediction, updating of state, and reading. There are two modes—generate and copy—and scores are calculated for each of them. …”) by Fox et al. US 20210174016 A1
KLOETZER further teaches:
KLOETZER teaches (“[0033] … a contradictory expression presenting system 44, receiving an input of a question sentence from PC 34, detecting mutually contradicting expressions as answers to the question sentence from documents on the Web by using mutually contradicting expressions stored in contradiction pattern pair storage device 42 …”) (“[0041] FIG. 4 … a training data expanding unit 136, establishing score CDP for each additional contradiction pattern by using the subscore CDPsub of the additional contradiction pattern pairs stored in additional contradiction pattern pair storage device 132, merging a prescribed ratio of contradiction pattern pairs having higher scores CDP with the training data stored in training data storage device 108 (see FIG. 3) and thereby expanding the training data; and an expanded training data storage device 138 storing the training data output from training data expanding unit 136. …”) (“[0062] In accordance with the result of learning, SVM 104 classifies each of the candidate pattern pairs having mutually opposite polarities stored in opposite polarity pair storage device … … Here, SVM 104 gives SVM score to each of the output pattern pairs. If it is highly possible that a pattern pair is a contradiction pattern pair, the score will be high, and otherwise, the score will be low. …”) (“[0095] The program includes a sequence of instructions consisting of a plurality of instructions causing computer 540 to function as various functional units of contradiction pattern pair collecting device 40 in accordance with the embodiment above. Some of the basic functions necessary to cause computer 540 to operate in this manner may be statically linked at the time of creating the program or dynamically linked at the time of executing the program, by the operating system running on computer 540, by a third-party program, or various programming tool kits or program library (for example, a computer program library for SVM) installed in computer 540. 
…”) (“[0100] By way of example, an SVM is used as a classifier. …”) by KLOETZER; US 20160260026 A1
KLOETZER is considered to be analogous to the claimed invention because it relates to a device for extracting contradictory expressions from a huge amount of texts and, more specifically, to a device for extracting, with high reliability, pairs of mutually contradicting expressions from a huge amount of texts.
Therefore, it would have been obvious for someone of ordinary skill in the art before the effective filing date of the claimed invention to modify Fox and Mann to incorporate the teachings of KLOETZER in order to detect contradiction pattern pairs.
One could have been motivated to do so because system performance will be improved by obtaining contradiction pattern pairs. (“[0069] Accuracy of contradiction pattern pairs in contradiction pattern pair storage device 42 obtained in this manner was confirmed by experiments as will be described in the following, and it was confirmed that the performance was clearly improved over the prior art.”) by KLOETZER; US 20160260026 A1
Regarding Claim 4, the combination teaches claim 1 as identified above.
Fox teaches:
4. The method of claim 1, wherein receiving a first transcript further comprises receiving a plurality of transcripts. Fox teaches (“[0055] Question-answer documents, once received, can be parsed so the text can be readily processed. …”) by Fox et al. US 20210174016 A1
Regarding Claim 10, the combination teaches claim 1 as identified above.
Fox teaches:
10. The method of claim 1, wherein ingesting the first transcript further comprises parsing the first transcript to identify metadata and storing the metadata in a database associated with the transcript data file. Fox teaches (“[0126] FIG. 3 shows an example of portions of a deposition transcript 300. While the example of FIG. 3 shows portions of a deposition, the concepts described herein can also apply to any suitable question-answer document. The deposition transcript 300 includes some front-matter, such as a cover, details of the court reporter, lists of people present, date-time details, and location, although other meta-information can be included in addition to or instead of this information. Likewise, deposition transcript 300 can also include a header 305 or footer 310 that can include information like the name of the person being deposed, name of the attorney, name of the client or party, name of the law firm, e-mail IDs, phone numbers, page numbers, information of a transcription service, or other information as can be appreciated.”) (“[0132] Generally, the PDF versions of legal depositions have multiple columns per page. Apache Tika—a cross-platform tool developed by the Apache Software Foundation that can be used to extract document metadata, along with content, over a multitude of file formats, using a single programming interface—can read multiple columns in a page separately by recognizing column separations which are encoded as extended ASCII codes. Hence, text from separate columns can be parsed in the correct sequence.” (“[0134] The text contained in the examination segment 315 of the deposition transcript 300 transcript can therefore be parsed line-by-line to extract questions and answers and discard any other extraneous data. In some examples, Apache Tika can be used to parse the text from the examination segment 315. In some examples, regular expressions (regex) can be used to search for a pattern within each line of the text. 
Each line can be converted to a string which contains only alphabetics, periods, and question marks. Then, a dictionary can be used to store all the patterns and the list of indices of the lines in which those patterns had appeared. Finally, checks can be made for patterns satisfying one or more separation constraints, and lines including patterns meeting the one or more separation constraints can be removed. For example, lines can be removed from the text parsed from the examination segment 315 if those lines do not begin with the answer or question tags (‘A.’ and ‘Q.’) and do not end with a question mark. As another example, lines that include particular patterns can be removed from the text parsed from the examination segment 315 if those lines were removed when the number of times these patterns appear is greater than or equal to the number of pages of the deposition transcript 300.”) by Fox et al. US 20210174016 A1
Claim 13 is a system claim with a limitation similar to the limitation of method Claim 10 and is rejected under similar rationale.
Claim 18 is a non-transitory computer-readable claim with a limitation similar to the limitation of method Claim 10 and is rejected under similar rationale.
Claims 5, 9, 12, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Fox et al., KLOETZER, and Mann et al., and further in view of Jones et al. US 20080114721 A1.
Regarding Claim 5, the combination teaches claim 1 as identified above.
Jones teaches:
5. The method of claim 1, further comprising generating, by the computing device and in response to a query, a semantic search result, wherein the query comprises a query question, and Jones teaches (“[0017] In some embodiments, methods and systems are provided using Modular Optimized Dynamic Sets ("MODS"). MODS may be used to generate, from a first query, one or more related or suggested queries, such as search queries. In some embodiments, substitutables are used with MODS, but methods other than the use of substitutables may also be used with MODS.”) (“[0020] Furthermore, substitutables can be used with MODS, but can have other uses as well. For example, substitutables can be used in generating related phrases in documents, in question answering such as by generating related questions or related answers, in decomposing phrases, indexing for web searching, retrieval algorithms for web searching, etc.”) by Jones et al. US 20080114721 A1
wherein the semantic search result comprises a set of one or more questions that are similar to the query question. Jones teaches (“[0016] Thus, there is a need for systems and methods that provide searches or suggested searches of search terms that are similar or related in meaning to the search terms that a user provides to a search engine. There is also a need for a system and method for searching unbidded search terms in a sponsored search systems that are similar or related or related in meaning to those that a user provides.”) by Jones et al. US 20080114721 A1
Jones is considered to be analogous to the claimed invention because it relates to generating one or more related queries with respect to a given query.
Therefore, it would have been obvious for someone of ordinary skill in the art before the effective filing date of the claimed invention to modify Fox, KLOETZER, and Mann to incorporate the teachings of Jones in order to include a search engine feature.
One could have been motivated to do so because the system would have an intelligent search engine that can maximize searching for similar terms. (“[0013] … a search engine user is at a disadvantage in the absence of intelligent searchi