Prosecution Insights
Last updated: April 19, 2026
Application No. 18/830,502

Systems and Methods for Creation and Application of Interaction Analytics

Non-Final OA: §101, §102, §103, §112
Filed: Sep 10, 2024
Examiner: BOOK, PHYLLIS A
Art Unit: 2454
Tech Center: 2400 — Computer Networks
Assignee: Read AI Inc.
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
OA Rounds: 1-2
To Grant: 2y 3m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 82% — above average (390 granted / 473 resolved; +24.5% vs TC avg)
Interview Lift: +14.3% (moderate) for resolved cases with interview
Typical Timeline: 2y 3m average prosecution; 10 currently pending
Career History: 483 total applications across all art units
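The headline figures in this panel are simple ratios of the career counts; a quick sanity check in Python, using only the counts shown above:

```python
# Examiner career counts from the panel above.
granted = 390    # applications granted
resolved = 473   # resolved cases
total = 483      # total applications across all art units

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 82.5%, displayed as 82%

pending = total - resolved
print(f"Currently pending: {pending}")  # 10
```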

Statute-Specific Performance

§101: 10.5% (-29.5% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§103: 47.5% (+7.5% vs TC avg)
§112: 22.7% (-17.3% vs TC avg)

Tech Center average estimate used for comparison; based on career data from 473 resolved cases.

Office Action

Rejections: §101, §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on October 7, 2020 was filed before the mailing of a first Office Action on the merits. Since the submission complies with the provisions of 37 CFR 1.97, the IDS has been considered by the Examiner.

Claim Objections

Claims 1-20 are objected to because of the following informalities: the claims contain incorrect margins. Pursuant to 37 CFR 1.75(i), the claims should be indented as follows: (i) Where a claim sets forth a plurality of elements or steps, each element or step of the claim should be separated by a line indentation. However, the claims as currently recited have the first word of each limitation on the left margin, using a “hanging indentation,” with the continuation lines following the first word of the limitation indented. This is the opposite of the indentation required by 37 CFR 1.75(i), which requires a “line indentation,” not a “hanging indentation.” To comply with this regulation, Applicant must use standard indentation practices, in which the first word of each limitation is indented with the subsequent lines on the left margin, and NOT the first word on the left margin with subsequent lines indented. For further information regarding hanging indentations, please refer to: https://libguides.ccsu.edu/c.php?g=736245&p=6687403, which states that “[h]anging indents are used in the works cited or bibliography of MLA, APA, Chicago, and various other citation styles.” Hanging indentations are used for the claims of published patents, but not for claim recitations during prosecution. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 8, and 15 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) and the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, for the reasons set forth below.

Claims 1, 8, and 15 are independent, and the recitation of Claim 1 is narrower than that of Claims 8 and 15, although the overall process arrives at the same conclusion. Claim 1 recites as follows:

1. A method for determining highlights of interactions, the method comprising:
    receiving a first portion of an interaction;
    receiving a second portion of the interaction;
    receiving a first significance metric value for the first portion of the interaction;
    receiving a second significance metric value for the second portion of the interaction;
    calculating, using the first significance metric value, whether the first portion of the interaction is a highlight;
    calculating, using the second significance metric value, whether the second portion of the interaction is a highlight;
    responsive to the first portion of the interaction and the second portion of the interaction each calculated to be a highlight, combining the first portion of the interaction with the second portion of the interaction into a condensed version of the interaction; and
    presenting the condensed version of the interaction in at least one of: a transcript, an audio format, a video format, or an audiovisual format.

Claims 8 and 15 recite the same process steps, but Claim 8 is directed to non-transitory computer-readable storage media and Claim 15 to a system. Claim 15 recites as follows:

15.
A system for determining highlights of an interaction, the system comprising:
    at least one hardware processor; and
    at least one non-transitory memory storing instructions, which, when executed by the at least one hardware processor, cause the system to:
        receive one or more portions of the interaction;
        for each particular portion of the one or more portions of the interaction: receive a significance metric value of the particular portion of the interaction, and calculate, using the received significance metric value, whether the particular portion of the interaction is a highlight;
        responsive to at least one portion of the interaction calculated to be a highlight, aggregating the at least one portion of the interaction calculated to be a highlight into a condensed version of the interaction; and
        present the condensed version of the interaction in at least one of: a transcript, an audio format, a video format, or an audiovisual format.

The Background section of the specification states as follows:

In a world where work environments are becoming increasingly remote, work meetings, the backbone of many employees' days, are relying more and more on video and audio interactions via video and audio feeds such as (but not limited to) video conferencing. Video conferencing, however, presents problems in participant engagement, participant morale, and employee productivity, and such problems can adversely affect the quality of not only an existing meeting, but of future meetings as well.

Specification, paragraph [0002], emphasis added.

Multiple reasons exist for these problems. First, users participating on video monitors find it difficult to understand the more subtle forms of feedback they receive via a computer monitor, and so they cannot tell the extent to which a meeting is proceeding effectively.
… In addition, a majority of meetings now have more than seven attendees, and so the ability to review and analyze each participant manually is extremely limited, even if the participant is adept at picking up on and understanding social cues through a video monitor.

Specification, paragraph [0003], emphasis added.

The highlighted portion above provides some understanding of the purpose of the invention. The Summary and Detailed Description sections of the specification provide further insight into the workings of the invention.

Embodiments of the present invention involve systems and methods of improving or adjusting interactions, in real time or in the future, using analytics gleaned from an interaction. … A reaction metric is calculated based on the received audiovisual score for a portion of the interaction, and is displayed proximate to the relevant portion of the interaction record. The metric and the display can be configured to be used in decision making related to an interaction.

Specification, paragraph [0006], emphasis added.

Embodiments of the invention include generation of meeting-relevant interaction models based on a fusion of sensing inputs, including both video and audio inputs. More specifically, embodiments of the invention can include receiving audiovisual data from an interaction, and sending those received inputs to be analyzed according to various models for some or all of a variety of factors, including (but not limited to) face sentiment, laughing, text sentiment, face orientation, face movement, video on/off status, and talking status. Once those models operate on the received data, each factor is given a score by the model. The scores are then aggregated or combined to provide a sentiment score or an engagement score that pertains to a participant (or participants), or that pertains to the meeting itself.
Once the scores are combined, in an embodiment, metrics, alerts or recommendations can be fed back to a user, who may be a participant, or who may not be a participant in the interaction.

Specification, paragraph [0017], emphasis added.

[T]he received portions of the interaction are processed to determine if they are a highlight of the meeting. For the purposes of the present invention, a highlight is a moment within an interaction that stands out based on features built into some combination of a meeting score, sentiment, and/or an engagement metric.

The overall purpose of the invention appears to be, based on participants' physical reactions captured by “receiving audiovisual data from an interaction,” providing a metric, alert, or recommendation to a person who may be a meeting participant, and determining whether interactions may constitute “highlights” in the meeting.

Under the 2019 Revised Guidance, it is necessary to first look to whether the claim recites: (1) any judicial exceptions, including certain groupings of abstract ideas (i.e., mathematical concepts, certain methods of organizing human activity such as a fundamental economic practice, or mental processes); and (2) additional elements that integrate the judicial exception into a practical application (see Manual of Patent Examining Procedure (“MPEP”) §§ 2106.05(a)-(c), (e)-(h)). Only if a claim (1) recites a judicial exception and (2) does not integrate that exception into a practical application is it then necessary to look to whether the claim: (3) adds a specific limitation beyond the judicial exception that is not “well-understood, routine, conventional” in the field (see MPEP § 2106.05(d)); or (4) simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception.
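Before turning to the eligibility framework, it may help to see the claimed method concretely. The steps of Claim 1, quoted above, reduce to a threshold-and-concatenate pipeline. A minimal Python sketch follows; the threshold value, class and function names, and data shapes are all hypothetical, since the claim recites no particular implementation:

```python
from dataclasses import dataclass

@dataclass
class Portion:
    text: str            # transcript excerpt for this portion of the interaction
    significance: float  # the received "significance metric value"

# Hypothetical cutoff: the claim only says a highlight is "calculated"
# using the significance metric value, not how.
HIGHLIGHT_THRESHOLD = 0.7

def is_highlight(portion: Portion) -> bool:
    # "calculating, using the ... significance metric value, whether
    # the ... portion of the interaction is a highlight"
    return portion.significance >= HIGHLIGHT_THRESHOLD

def condense(portions: list[Portion]) -> str:
    # "combining the ... portion[s] ... into a condensed version"
    highlights = [p for p in portions if is_highlight(p)]
    # "presenting the condensed version ... in ... a transcript"
    return "\n".join(p.text for p in highlights)

meeting = [Portion("Intro chatter", 0.2),
           Portion("Q3 roadmap decision", 0.9),
           Portion("Action items assigned", 0.8)]
print(condense(meeting))
```

Under these assumptions, only the two portions whose significance clears the threshold survive into the condensed transcript.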
Prong One - Abstract Idea

The Revised Guidance extracts and synthesizes key concepts identified by the courts as abstract ideas to explain that the abstract idea exception includes the following groupings of subject matter:

(a) Mathematical concepts - mathematical relationships, mathematical formulas or equations, mathematical calculations;

(b) Certain methods of organizing human activity - fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and

(c) Mental processes - concepts performed in the human mind (including an observation, evaluation, judgment, opinion).

Under the Revised Guidance, if the claim does not recite a judicial exception (a law of nature, natural phenomenon, or subject matter within the enumerated groupings of abstract ideas above), then the claim is patent-eligible at Prong One. However, if the claim recites a judicial exception (i.e., an abstract idea enumerated above, a law of nature, or a natural phenomenon), the claim requires further analysis for a practical application of the judicial exception in Step 2A.

Prong Two, Step 2A - Practical Application

If a claim recites a judicial exception in Step 2A, a determination is made whether the recited judicial exception is integrated into a practical application of that exception by: (a) identifying whether there are any additional elements recited in the claim beyond the judicial exception(s); and (b) evaluating those additional elements individually and in combination to determine whether they integrate the exception into a practical application.
The seven identified “practical application” sections of the MPEP are cited in the Revised Guidance under Step 2A. The first four constitute “practical applications,” as follows:

(1) MPEP § 2106.05(a) Improvements to the Functioning of a Computer or to Any Other Technology or Technical Field
(2) MPEP § 2106.05(b) Particular Machine
(3) MPEP § 2106.05(c) Particular Transformation
(4) MPEP § 2106.05(e) Other Meaningful Limitations

The last three do not constitute “practical applications,” as follows:

(5) MPEP § 2106.05(f) Mere Instructions to Apply an Exception
(6) MPEP § 2106.05(g) Insignificant Extra-Solution Activity
(7) MPEP § 2106.05(h) Field of Use and Technological Environment

If the recited judicial exception is integrated into a practical application as determined under one or more of the MPEP sections cited above, then the claim is not directed to the judicial exception, and the patent-eligibility inquiry ends. If not, then the analysis proceeds to Step 2B.

Prong Two, Step 2B - “Inventive Concept” or “Significantly More”

The Federal Circuit has held that a claim that recites a judicial exception under Step 2A is nonetheless patent eligible at the second step of the Alice/Mayo test (USPTO Step 2B). This can occur if the claim recites additional elements that render the claim patent eligible by providing “significantly more” than the recited judicial exception, such as because the additional elements were unconventional in combination. Therefore, if a claim has been determined to be directed to a judicial exception under Revised Step 2A, the additional elements must be evaluated individually and in combination under Step 2B to determine whether they provide an inventive concept (i.e., whether the additional elements amount to significantly more than the exception itself).
Under the Revised Guidance, it must be determined in Step 2B whether an additional element or combination of elements: (1) “Adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present;” or (2) “simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.” See Revised Guidance, III.B.

If the Examiner determines under Step 2B that the element (or combination of elements) amounts to significantly more than the exception itself, the claim is eligible, thereby concluding the eligibility analysis. However, if a determination is made that the element and combination of elements does not amount to significantly more than the exception itself, the claim is ineligible under Step 2B, and the claim is rejected for lack of subject matter eligibility.

Analysis

In accordance with Prong One of the Revised Guidance, the steps recited in independent Claims 1, 8, and 15, all of which recite analogous subject matter, are directed to a judicial exception. The process recited in the independent claims describes Certain Methods of Organizing Human Activity, which includes managing personal behavior or interactions between people. Specifically, all three independent claims recite monitoring interactions that occur in a meeting to determine whether they constitute a “highlight” and then sending a condensed version of the interaction evaluated as a “highlight” to a user as a metric, alert, or recommendation (paragraph [0017]).
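The score fusion that paragraph [0017] describes, per-factor model outputs combined into a single engagement or sentiment score, might look like the following sketch. The factor names come from the specification; the scores, weights, scale, and the choice of a weighted average are all hypothetical, since the specification does not disclose how the factor scores are "aggregated or combined":

```python
# Hypothetical per-factor scores in [0, 1], one per model output
# named in specification paragraph [0017].
factor_scores = {
    "face_sentiment": 0.8,
    "laughing": 0.1,
    "text_sentiment": 0.7,
    "face_orientation": 0.9,
    "face_movement": 0.5,
    "video_on": 1.0,
    "talking": 0.6,
}

# Hypothetical equal weights; the combination rule is an assumption.
weights = {name: 1.0 for name in factor_scores}

def engagement_score(scores: dict, weights: dict) -> float:
    """Weighted average of per-factor model scores."""
    total_weight = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total_weight

print(f"Engagement: {engagement_score(factor_scores, weights):.2f}")
```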
The invention receives audiovisual data from an interaction and sends those received inputs to be analyzed according to various models for some or all of a variety of factors, including (but not limited to) face sentiment, laughing, text sentiment, face orientation, face movement, video on/off status, and talking status (paragraph [0017]).

Unfortunately, the determination of whether the interactions involve a “highlight” is somewhat problematic, since the term is only marginally defined as “a moment within an interaction that stands out based on features built into some combination of a meeting score, sentiment, and/or an engagement metric.” These features are characterized as “pivotal and/or interesting moments that may correspond to discrete events, such as the start of a presentation or questions being asked, or they can be based on extreme levels and/or changes in affective metrics, such as high levels of agreement or disagreement or a sharp decrease in sentiment” (paragraph [0066]). People’s agreement, disagreement, and sentiment are all human reactions to the interactions occurring in the meeting.

Overall, the purpose of the invention relates to “Organizing Human Activity,” for the purpose of “managing personal behavior or interactions between people.” Other than the recitations of processors, memory, and non-transitory computer-readable media in Claims 8 and 15, nothing in the claim elements precludes the steps from being classified as Certain Methods of Organizing Human Activity. Accordingly, under Prong One, the independent claims recite an abstract idea.

As for the dependent claims, they also recite the same abstract ideas. Claim 2 builds on the Claim 1 subject matter by adding a “third portion” of the human interactions. Claims 3 and 13 recite a “significance metric value” based on human sentiment and engagement.
Claim 4 builds on the “significance metric value” subject matter of Claims 1 and 3, adding a threshold value to arrive at a highlight. Claim 5 further defines the “significance metric value” of Claims 1 and 3 as based on several possible features, which include the human features of participant engagement and sentiment, as well as facial expressions and voice tone. Claim 6 further defines the “significance metric value” of Claims 1 and 3 as being determined by machine learning models and adds the term “affective metric” based on levels of agreement, disagreement, and sentiment, all of which are human reactions. Claim 7 further defines the “significance metric value” of Claims 1 and 3 with respect to the beginning or end of an event within the human interaction. Claim 9 defines the “significance metric value” based on video and audio feeds of a human interaction. Claim 10 compares the “significance metric value” to historical values to determine trends or patterns of human participant engagement or sentiment. Claim 11 recites weights for the “significance metric values” to calculate the “highlights.” Claim 12 recites dividing the human interaction based on various factors, which include sentiment, engagement, and reaction of the participants. Claim 13 assigns the “significance metric values” to each portion of an interaction using sentiment and engagement scores. Claim 14 divides the human interactions into user-defined portions. Claim 16 performs a longitudinal analysis (a mathematical concept, which is also an abstract idea) by comparing current and historical “significance metric values.” Claim 17 performs an enterprise analysis (another mathematical concept) by comparing “significance metric values” from human interactions. Claim 18 displays “significance metric values” of interactions. Claim 19 recites that some “significance metric values” of portions of interactions are greater than others.
Claim 20 recites using “significance metric values” to generate metrics, alerts or recommendations.

Prong Two, Step 2A

After determining under Prong One that the claims recite a judicial exception, under Prong Two, Step 2A, the analysis is conducted to determine whether the judicial exception is integrated into a practical application. There are only four categories of practical applications:

1) Improvements to the Functioning of a Computer or to Any Other Technology or Technical Field, which is not applicable to this invention.
2) Particular Machine, which is also not applicable, since standard computing systems are used.
3) Particular Transformation, but there are no transformations disclosed.
4) Other Meaningful Limitations. Since standard computing methodologies are used, including audio and video for capturing “interactions,” there do not appear to be any “meaningful limitations” associated with the invention.

Based on the analysis under Prong Two, Step 2A of the Revised Guidance, Claim 1, and similarly recited Claims 8 and 15, do not recite a “practical application” that would overcome the judicial exception.

Prong Two, Step 2B

Next, if a claim has been determined to be directed to a judicial exception under the Revised Guidance, the additional elements must be evaluated individually and in combination under Step 2B to determine whether they provide an inventive concept by amounting to “significantly more” than the judicial exception itself.
It must be determined in Step 2B whether an additional element or combination of elements: (1) “Adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present;” or (2) “simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.” See Revised Guidance, III.B.

In the instant application, the claims only recite generic computing elements: processors, memory, and non-transitory computer-readable media. The components are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components such as memory and processors, all of which are “well-understood, routine, conventional activities.”

Accordingly, the analysis under the multiple steps of the Revised Guidance leads to the determination that independent Claims 1, 8, and 15 are directed to an abstract idea under 35 U.S.C. 101, and so are the dependent claims, which incorporate the abstract idea by virtue of their dependencies. Therefore, Claims 1-20 are directed to a judicial exception under the category of “Organizing Human Activity,” for the purpose of “managing personal behavior or interactions between people.” The claims are not patent-eligible.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C.
112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 8, and 15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding Claims 1, 8, and 15: Claims 1, 8, and 15 all recite using calculations to determine whether an interaction constitutes a “highlight.” However, the term “highlight” is not clearly defined in the specification, which discloses as follows:

For the purposes of the present invention, a highlight is a moment within an interaction that stands out based on features built into some combination of a meeting score, sentiment, and/or an engagement metric. Interactions can be measured from at least one of the following: a transcript, an audio recording, a video recording, or an audiovisual recording. One skilled in the art will understand that highlights can be pivotal and/or interesting moments that may correspond to discrete events, such as the start of a presentation or questions being asked, or they can be based on extreme levels and/or changes in affective metrics (e.g., high levels of agreement or disagreement or a sharp decrease in sentiment).

Specification, paragraph [0066], emphasis added.

First, terms such as engagement, agreement, disagreement, and sentiment are highly subjective. For example, one person who has a more subdued personality may show engagement or agreement with a fairly blank expression, while another more outgoing person may smile broadly or nod.
The term “sentiment” itself is indefinite in the context of the invention, since it is defined as “a thought, opinion, or idea based on a feeling about a situation, or a way of thinking about something” (dictionary.cambridge.org/dictionary/english/sentiment). Thus, how would it be possible to determine what any person is thinking or feeling based on an audiovisual recording?

Secondly, the definition of “highlight” is based on features including sentiment and engagement metrics, which are defined as follows:

An audiovisual score is received for a relevant portion of the interaction, the received audiovisual score being based on data received from at least a subset of participants in the interaction. A reaction metric is calculated based on the received audiovisual score for a portion of the interaction, and is displayed proximate to the relevant portion of the interaction record. The metric and the display can be configured to be used in decision making related to an interaction.

Specification, paragraph [0006], emphasis added.

However, the “audiovisual score” is not defined in a manner that would be well understood by a person of ordinary skill in the art (POSITA). The specification discloses as follows:

The audiovisual score embodies the audio and/or video results calculated as described in this document, and is based on data received from at least a subset of participants in the interaction during the interaction. In an embodiment, the data can be data that has been received from a speaker in the relevant portion of the interaction. In an embodiment, the data can be data received from at least a subset of listeners in the relevant portion of the interaction. In an embodiment, the data can be data received from a combination of the speaker along with at least a subset of listeners in the relevant portion of the interaction. In an embodiment, the data can include data that has been received from previous interactions.
Specification, paragraph [0059], emphasis added.

A POSITA would wonder exactly how something a speaker has said can be assigned a score, and what the score values might be.

Claims 1, 8, and 15 also recite the term “significance metric value.” For something to be significant according to a dictionary definition, it must be “important or noticeable” (dictionary.cambridge.org/dictionary/english/significant). But what is important or noticeable to one person may not be to another, thereby making the term “significance metric value” indefinite as well. In short, the basis of the three independent Claims 1, 8, and 15 revolves around the calculation of a “highlight” and a “significance metric value,” both of which are indefinite terms.

Regarding Claims 2-7, 9-14, and 16-20: Because these claims depend from rejected base claims, they are also rejected.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 5-6, 8-9, 15, and 20 are rejected under 35 U.S.C.
102(a)(2) as being anticipated by Litvin (US 2021/0271864 A1, hereinafter referred to as Litvin).

Regarding Claim 1, Litvin teaches:

“receiving a first portion of an interaction” and “receiving a second portion of the interaction” (paragraph [0025]). [The system includes collecting media data from active participants during a digital interaction ([0025]).] (NOTE: Since the system is collecting all the media data, it includes the “first portion of an interaction” and the “second portion of the interaction.”)

“receiving a first significance metric value for the first portion of the interaction” and “receiving a second significance metric value for the second portion of the interaction” (paragraph [0035]). [The system extracts semantically significant portions of a conversation between two participants, and the portions are identified, highlighted, and/or characterized; feedback from the process is generated and incorporated into specific digital tools ([0035]).]

“calculating, using the first significance metric value, whether the first portion of the interaction is a highlight” and “calculating, using the second significance metric value, whether the second portion of the interaction is a highlight” (paragraph [0035]). [The system highlights semantically significant portions of a conversation between participants and generates a perception heat map to identify and mark interaction fragments corresponding to detected sentiments ([0035]).] (NOTE: The generation of the perception heat map during the highlighting process is equivalent to “calculating whether the portion of the interaction is a highlight.”)

“responsive to the first portion of the interaction and the second portion of the interaction each calculated to be a highlight, combining the first portion of the interaction with the second portion of the interaction into a condensed version of the interaction” (paragraphs [0026], [0129]).
[The system and method include collecting media data from active participants during a digital interaction; extracting communication metrics, such as verbal and non-verbal communication cues of the participants and/or other communication content of an interaction, such as a transcript or interaction artifacts; processing the communication metrics; and building a sentiment perception product based on a combination of processed communication metrics ([0026]). Mapping of a sentiment perception product having sentiment/expression characterizations to semantic interaction fragments includes highlighting, marking, or otherwise augmenting time ranges of an interaction; this can be used to generate a unique representation of digital interactions ([0129]).] (NOTE: The sentiment perception product, which includes highlighting, is equivalent to “combining the first portion of the interaction with the second portion of the interaction into a condensed version of the interaction.”)

“presenting the condensed version of the interaction in at least one of: a transcript, an audio format, a video format, or an audiovisual format” (paragraph [0130]). [The sentiment perception heat map of the interaction, which is segmented by relevant contextual/semantic interaction fragments, could be generated and used as a report on the interaction, used as feedback for one or multiple participants ([0130]).] (NOTE: The report on the interaction is equivalent to “a transcript.”)

Regarding Claims 8 and 15, Litvin teaches:

“receive one or more portions of the interaction” (paragraph [0025]). [The system includes collecting media data from active participants during a digital interaction ([0025]).] (NOTE: The collected media data is equivalent to “portions of the interaction.”)

“for each particular portion of the one or more portions of the interaction: receive a significance metric value of the particular portion of the interaction” (paragraph [0035]).
[The system extracts semantically significant portions of a conversation between two participants, and the portions are identified, highlighted, and/or characterized; feedback from the process is generated and incorporated into specific digital tools ([0035]).]

“calculate, using the received significance metric value, whether the particular portion of the interaction is a highlight” (paragraph [0035]). [The system highlights semantically significant portions of a conversation between participants and generates a perception heat map to identify and mark interaction fragments corresponding to detected sentiments ([0035]).] (NOTE: The generation of the perception heat map during the highlighting process is equivalent to “calculating whether the portion of the interaction is a highlight.”)

“responsive to at least two portions of the interaction each calculated to be a highlight, combine the at least two portions of the interaction into a condensed version of the interaction” (paragraphs [0026], [0129]). [The system and method include collecting media data from active participants during a digital interaction; extracting communication metrics, such as verbal and non-verbal communication cues of the participants and/or other communication content of an interaction, such as a transcript or interaction artifacts; processing the communication metrics; and building a sentiment perception product based on a combination of processed communication metrics ([0026]). Mapping of a sentiment perception product having sentiment/expression characterizations to semantic interaction fragments includes highlighting, marking, or otherwise augmenting time ranges of an interaction; this can be used to generate a unique representation of digital interactions ([0129]).]
(NOTE: The sentiment perception product, which includes highlighting, is equivalent to “combin[ing] the at least two portions of the interaction into a condensed version of the interaction.”)

“present the condensed version of the interaction in at least one of: a transcript, an audio format, a video format, or an audiovisual format” (paragraph [0130]). [The sentiment perception heat map of the interaction, which is segmented by relevant contextual/semantic interaction fragments, could be generated and used as a report on the interaction, or used as feedback for one or multiple participants ([0130]).] (NOTE: The report on the interaction is equivalent to “a transcript.”)

Litvin also teaches “One or more non-transitory, computer-readable storage media storing instructions for determining highlights of an interaction” as recited in Claim 8, and “A system for determining highlights of an interaction, the system comprising: at least one hardware processor; and at least one non-transitory memory storing instructions” as recited in Claim 15 (paragraph [0157]).

Regarding Claim 2, Litvin teaches all the limitations of parent Claim 1. Litvin further teaches:

“receiving a third portion of the interaction” (paragraph [0025]). [The system includes collecting media data from active participants during a digital interaction ([0025]).] (NOTE: The collected media data is equivalent to “a third portion of the interaction.”)

“receiving a third significance metric value for the third portion of the interaction” (paragraph [0035]).
[The system extracts semantically significant portions of a conversation between two participants, and the portions are identified, highlighted, and/or characterized; feedback from the process is generated and incorporated into specific digital tools ([0035]).]

“calculating, using the third significance metric value, whether the third portion of the interaction is a highlight” (paragraph [0035]). [The system highlights semantically significant portions of a conversation between participants and generates a perception heat map to identify and mark interaction fragments corresponding to detected sentiments ([0035]).] (NOTE: The generation of the perception heat map during the highlighting process is equivalent to “calculating whether the third portion of the interaction is a highlight.”)

“responsive to the third portion of the interaction being calculated to be a highlight, combining the third portion of the interaction into the condensed version of the interaction” (paragraphs [0026], [0129]). [The system and method include collecting media data from active participants during a digital interaction; extracting communication metrics, such as verbal and non-verbal communication cues of the participants and/or other communication content of an interaction, such as a transcript or interaction artifacts; processing the communication metrics; and building a sentiment perception product based on a combination of processed communication metrics ([0026]). Mapping of a sentiment perception product having sentiment/expression characterizations to semantic interaction fragments includes highlighting, marking, or otherwise augmenting time ranges of an interaction; this can be used to generate a unique representation of digital interactions ([0129]).]
(NOTE: The sentiment perception product, which includes highlighting, is equivalent to “combining the third portion of the interaction into the condensed version of the interaction.”)

Regarding Claim 3, Litvin teaches all the limitations of parent Claim 1. Litvin further teaches:

“wherein one or more of: the first significance metric value or the second significance metric value is calculated using at least one of: a meeting score, a sentiment score, or an engagement score” (paragraph [0035]). [The system highlights semantically significant portions of a conversation between participants and generates a perception heat map to identify and mark interaction fragments corresponding to detected sentiments ([0035]).] (NOTE: The interaction fragments corresponding to detected sentiments are equivalent to a “sentiment score.”)

Regarding Claim 5, Litvin teaches all the limitations of parent Claim 1. Litvin further teaches:

“wherein the first significance metric value and the second significance metric value are determined using at least one of: participant engagement, participant sentiment, audio sentiment, video sentiment, facial expressions, voice tone, or textual sentiment” (paragraph [0047]). [The system is used for identifying sentiment-significant fragments, segments, or other portions of digital interactions; such fragmenting can be based on analysis of multiple sources of media data and different forms of communication, including both verbal and non-verbal communication ([0047]).] (NOTE: The verbal and non-verbal communication is equivalent to “audio sentiment” and “video sentiment.”)

Regarding Claim 6, Litvin teaches all the limitations of parent Claim 1. Litvin further teaches:

“wherein one or more of: the first significance metric value or the second significance metric value are determined using one or more machine learning models” (paragraph [0027]). [Machine learning and/or other suitable analysis techniques are used to analyze and provide metric-based insights into expressions of various sentiments during an interaction ([0027]).]

“wherein the one or more machine learning models are configured to identify highlights using changes in values of one or more affective metrics of the interaction” (paragraph [0078]). [Non-verbal communication metrics, including facial feature movements, such as eye positioning, mouth positioning, and cheek positioning; head movement; body pose features, such as movement of the arm, leg, hand, elbow, shoulder, head, neck, and/or posture; health metrics, such as pulse and breathing rate; and voice metrics, such as pitch, are modeled through neural networks or other machine learning models to identify, classify, and/or interpret non-verbal communication metrics ([0078]).]

“wherein the one or more affective metrics include at least one of: level of agreement, level of disagreement, or sentiment of the interaction” (paragraph [0027]). [Machine learning and/or other suitable analysis techniques are used to analyze and provide metric-based insights into expressions of various sentiments during an interaction ([0027]).]

Regarding Claim 9, Litvin teaches all the limitations of parent Claim 8.
Litvin teaches:

“wherein the interaction is a video conference, and wherein the significance metric values are derived in real-time of one or more of: video feeds or audio feeds of the interaction” (paragraphs [0024], [0047]). [The system and method may be used within digital communications involving audio or video conversations between two or more humans ([0024]). The system is used for identifying sentiment-significant fragments, segments, or other portions of digital interactions; such fragmenting can be based on analysis of multiple sources of media data and different forms of communication, including both verbal and non-verbal communication ([0047]).] (NOTE: The audio and video conversations between participants are equivalent to the “video feeds or audio feeds of the interaction.”)

Regarding Claim 20, Litvin teaches all the limitations of parent Claim 15. Litvin further teaches:

“using the significance metric values of the one or more portions of the interaction, generate one or more of: metrics, alerts, or recommendations” (paragraph [0036]). [A presenter's spoken words, screen sharing content, content of a slide presentation, the speaker's facial and body language, and the audience responses are all measured and used in generating a communication analysis and recommendations ([0036]).]

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Litvin (US 2021/0271864 A1, hereinafter referred to as Litvin) in view of Faulkner et al. (US 2018/0227138 A1, hereinafter referred to as Faulkner).

Regarding Claim 7, Litvin teaches all the limitations of parent Claim 1. Litvin does not teach:

“wherein one or more of: the first significance metric value or the second significance metric value are determined using at least one of: a beginning of an event within the interaction, or an end of the event within the interaction”

Faulkner teaches:

“wherein one or more of: the first significance metric value or the second significance metric value are determined using at least one of: a beginning of an event within the interaction, or an end of the event within the interaction” (paragraph [0091]). [The end-of-meeting object embedded in the active conversation pane enables the user to view recorded content and/or interact with representations of notable events on the interactive timeline based on priority or significance without leaving the active conversation pane ([0091]).]

Both Litvin and Faulkner teach teleconference systems with interactions between participants, and those systems are comparable to that of the instant application.
Because the two cited references are analogous to the instant application, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to include in the Litvin disclosure the ability to associate significance information with the end of a meeting, as taught by Faulkner. Such inclusion would have increased the ability of the system to tie significance to the end of a meeting, consistent with the rationale of combining prior art elements according to known methods to yield predictable results, establishing a prima facie case of obviousness (MPEP 2143(I)(A)) under KSR International Co. v. Teleflex Inc., 127 S. Ct. 1727, 82 USPQ2d 1385, 1395-97 (2007).

Potentially Allowable Subject Matter

Claims 4, 10-12, and 16-17 are objected to as being dependent upon a rejected base claim, but would normally be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

The subject matter not found in the prior art for Claim 4 is as follows: responsive to the first significance metric value satisfying the first threshold value, indicating the first portion of the interaction as a highlight; and responsive to the significance metric value satisfying the second threshold value, indicating the second portion of the interaction as a highlight.

The subject matter not found in the prior art for Claim 10 is as follows: compare the significance metric values of the one or more portions of the current interaction with historical significance metric values of previous interactions occurring prior to the current interaction.

The subject matter not found in the prior art for Claim 11 is as follows: receive a weight for each significance metric value; apply the received weights to the significance metric values.
The subject matter not found in the prior art for Claim 12 is as follows: divide the interaction into a number of portions using at least one of: a speaker, a topic, a time, a non-speaking participant, a grouping of participants, a sentiment of one or more participants, an engagement of the one or more participants, or a reaction of the one or more participants.

The subject matter not found in the prior art for Claim 14 is as follows: receive a user-defined number of portions to divide the interaction into; and divide the interaction into the user-defined number of portions.

The subject matter not found in the prior art for Claim 16 is as follows: perform a longitudinal analysis of the interaction by comparing the significance metric values of the one or more portions of the interaction with scores from historical interactions.

The subject matter not found in the prior art for Claim 17 is as follows: perform an enterprise analysis by comparing the significance metric values of the one or more portions of the interaction with scores from other interactions within an enterprise of the interaction.

The subject matter not found in the prior art for Claim 18 is as follows: display the significance metric values of the one or more portions of the interaction in a location adjacent to corresponding portions of the interaction.

The subject matter not found in the prior art for Claim 19 is as follows: wherein the one or more portions of the interaction have greater significance metric values than other portions of the interaction.

However, since all of the above claims are rejected as being directed to an abstract idea, they cannot be indicated as patent eligible until that rejection is overcome.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The additional prior art references listed on Form PTO-892 and not used in the prior art rejections are also relevant to this application.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHYLLIS A BOOK whose telephone number is (571)272-0698. The examiner can normally be reached M-F 10:00 am - 7:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, GLENTON BURGESS, can be reached at 571-272-3949. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PHYLLIS A BOOK/
Primary Examiner, Art Unit 2454

1 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 (January 7, 2019) (hereinafter "Revised Guidance") (https://www.govinfo.gov/content/pkg/FR-2019-01-07/pdf/2018-28282.pdf)

Prosecution Timeline

Sep 10, 2024
Application Filed
Jan 24, 2026
Non-Final Rejection — §101, §102, §103
Mar 24, 2026
Applicant Interview (Telephonic)
Mar 25, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592905
METHOD FOR DETERMINING NAT TRAVERSAL POLICY AND DEVICE
2y 5m to grant Granted Mar 31, 2026
Patent 12587467
SYSTEM AND METHOD FOR PATH COMPUTATION SERVICE FOR A SERVICE AWARE VIRTUAL TOPOLOGY OVER A WIDE AREA NETWORK
2y 5m to grant Granted Mar 24, 2026
Patent 12581361
Optimizing Traffic Redirection Operations
2y 5m to grant Granted Mar 17, 2026
Patent 12580785
ENHANCED TECHNIQUES FOR REDUCING AUDIO FEEDBACK DURING COMMUNICATION SESSIONS
2y 5m to grant Granted Mar 17, 2026
Patent 12563446
WEIGHTED LOAD BALANCING FOR MULTI-LINK OPERATION
2y 5m to grant Granted Feb 24, 2026
Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
97%
With Interview (+14.3%)
2y 3m
Median Time to Grant
Low
PTA Risk
Based on 473 resolved cases by this examiner. Grant probability derived from career allow rate.
