DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This is a final rejection in response to amendments/remarks filed on 02/26/2026. Claims 1-13 have been amended and claim 14 has been added. Claims 1-14 are pending and are examined herein.
Priority
This application claims priority to foreign-filed Japanese application JP2021-147207, filed on 09/09/2021. The application has a date of availability of September 10, 2021.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Is the claim to a Process, Machine, Manufacture, or Composition of Matter?
Claims 1-12, 14: A feedback device comprising a computer processor that provides feedback in an online meeting with participants, the computer processor coupled with a region of a storage unit and configured to control information used for feedback, wherein the computer processor is configured to perform steps of:
Claim 13: A non-transitory computer readable medium storing a program that when executed by a processor causes a computer with the processor to function as a feedback device that provides feedback based on an online meeting with participants, the program causing the computer to function as:
All of the claims fall within at least one potentially eligible subject matter category, at least "machine" or "manufacture"; therefore, the claims are to be further analyzed under step 2.
Step 2A Prong 1: Is the claim reciting a Judicial Exception (A Law of Nature, a Natural Phenomenon (Product of Nature), or An Abstract Idea)?
The claims, under the broadest reasonable interpretation in light of the specification, are analyzed herein. Representative claims 1 and 13 are marked up, isolating the abstract idea from the additional elements, wherein the abstract idea is in bold and the additional elements have been italicized, as follows:
Claim 1-12, 14 Preamble: A feedback device comprising a computer processor that provides feedback in an online meeting with participants, the computer processor coupled with a region of a storage unit and configured to control information used for feedback, wherein the computer processor is configured to perform steps of:
Claim 13 Preamble: A non-transitory computer readable medium storing a program that when executed by a processor causes a computer with the processor to function as a feedback device that provides feedback based on an online meeting with participants, the program causing the computer to function as:
Claim 1 (Also representative of claim 13):
acquiring meeting data related to the meeting;
acquiring a type of feature points to be extracted from the meeting data, and a judgment criterion for the type of feature points, as criterion information;
extracting feature points included in the acquired meeting data in relation to the type included in the acquired criterion information;
comparing the extracted feature points with the judgment criterion included in the acquired criterion information;
creating a notification for a relevant party involved in the meeting, based on a result of the comparing;
outputting the created notification to a relevant party,
wherein the participants include an applicant who wishes to belong to a predetermined organization, and an interviewer who belongs to the predetermined organization and who conducts an interview with the applicant,
wherein the meeting is an interview with the applicant, conducted by the interviewer,
wherein the criterion information includes, as a type of the feature points, one or more types for assessing an ability to elicit information, one or more types for assessing an ability to judge appropriately, and one or more types for assessing an ability to attract applicants, and
wherein:
the ability to elicit information includes the interviewer's ability to elicit necessary information from the applicants,
the ability to judge appropriately includes the interviewer's ability to select appropriate applicants for the next stage of selection or job offers, and
the ability to attract applicants includes the interviewer's ability to increase the applicant's favorable impression of the predetermined organization.
When evaluating the bolded limitations of the claims under the broadest reasonable interpretation in light of the specification, it is clear that representative claims 1 and 13 recite subject matter that falls within the abstract idea category of "certain methods of organizing human activity." More specifically, the present claims fall under the sub-grouping "managing personal behavior or relationships or interactions between people," including social activities, teaching, and following rules or instructions, as outlined in MPEP 2106.04(a)(2)(II)(C). The claims, specifically the language in bold, merely recite acquiring interaction data in the form of meeting data, extracting feature points and a judgment criterion, comparing the feature points with the judgment criterion, and creating and outputting a notification for the relevant party. This is no more than managing interactions between people, resulting in a set of instructions to a person, which falls within "certain methods of organizing human activity." For example, in the specification at [0064]: "As illustrated in Fig. 4, the notification creation unit 21, for example, creates a notification "You look serious. Have a smile" for a participant having a low smile ratio. The notification creation unit 21 creates a notification "You look nice!" for a participant having a high smile ratio." This is, by definition, managing personal behavior by providing instructions to a person in order to facilitate their interactions with other people.
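For illustration only, the generic character of the recited steps can be seen in how directly they reduce to a short comparison routine. The following sketch is hypothetical and not taken from the application: the function name, data layout, and the 0.5 threshold are assumptions, while the two notification messages come from the example in specification [0064].

```python
# Hypothetical sketch of the recited acquire/extract/compare/notify steps.
# The data layout and the 0.5 threshold are assumed for illustration; the
# messages are the ones quoted from specification paragraph [0064].

def create_notifications(meeting_data, criterion_info):
    """Extract one feature point per participant, compare it with the
    judgment criterion, and create a notification based on the result."""
    feature_type = criterion_info["type"]              # e.g., "smile_ratio"
    threshold = criterion_info["judgment_criterion"]   # e.g., 0.5
    notifications = {}
    for participant, features in meeting_data.items():
        value = features[feature_type]                 # extract feature point
        if value < threshold:                          # compare with criterion
            notifications[participant] = "You look serious. Have a smile"
        else:
            notifications[participant] = "You look nice!"
    return notifications

meeting_data = {"interviewer": {"smile_ratio": 0.2},
                "applicant": {"smile_ratio": 0.8}}
criterion = {"type": "smile_ratio", "judgment_criterion": 0.5}
print(create_notifications(meeting_data, criterion))
```

As the sketch shows, each recited step maps onto ordinary data handling, consistent with the analysis above that the steps amount to managing interactions between people carried out on generic computing components.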
Even when considering the amended limitations, these limitations merely confine the analyzed interactions to the context of a job interview, which still falls within "certain methods of organizing human activity," as an interview is merely an assessment of an interaction or of personal behavior. Furthermore, the amendments to the criterion information, i.e., "the ability to elicit information includes the interviewer's ability to elicit necessary information from the applicants, the ability to judge appropriately includes the interviewer's ability to select appropriate applicants for the next stage of selection or job offers, and the ability to attract applicants includes the interviewer's ability to increase the applicant's favorable impression of the predetermined organization," still fall under the abstract idea category because they merely define the types of behaviors that are being analyzed as input data. Merely limiting the types of user interactions/behaviors still falls within "managing personal behavior, or interactions, or relationships" between people.
Therefore, the claims recite an abstract idea under “certain methods of organizing human activity” and are to be further analyzed under step 2A Prong 2.
Step 2A Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application?
Claims 1 and 13 recite the following additional elements:
- A feedback device comprising a computer processor, the computer processor coupled with a region of a storage unit, wherein the computer processor is configured to perform steps of: in claim 1
- A non-transitory computer readable medium storing a program that when executed by a processor causes a computer with the processor to function as a feedback device in claim 13
The additional elements listed above, when considered individually and in combination with the claim as a whole, amount to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on generic computing components, as outlined in MPEP 2106.05(f). In this case, the abstract idea of "acquiring interaction data in the form of meeting data, extracting feature points and judgment criterion, comparing the feature points with the judgment criterion, and creating and outputting a notification for the relevant party" is being performed on generic computing components such as a feedback device, a computer processor, a region of a storage unit, a non-transitory computer readable medium, and a computer.
It is evident from [0047] that the feedback device is no more than a generic computer performing tasks in order to execute an abstract idea: "[0047] The feedback device 1, for example, is an information processing device such as a server." Furthermore, as elaborated in the claim interpretation under 112(f) above, the various "units" are interpreted under 112(f) to cover any CPU capable of performing the functional tasks associated with the unit. For example, in [0054]: "[0054] The participant information acquisition unit 13, for example, is realized by the operation of a CPU." Therefore, the claims are merely instructions to perform the abstract idea on a generic computing device.
Furthermore, there is no improvement to the technology or technological field purported in the specification or recited within the scope of the actual claim language. (See MPEP 2106.05(a) for Improvements to Technology or Technical Field). Even when considering these additional elements individually or in combination, they fail to integrate the abstract idea into a practical application. Therefore, the claims 1 and 13 are directed to an abstract idea without integration into a practical application.
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
Claims 1 and 13 recite the following additional elements:
- A feedback device comprising a computer processor, the computer processor coupled with a region of a storage unit, wherein the computer processor is configured to perform steps of: in claim 1
- A non-transitory computer readable medium storing a program that when executed by a processor causes a computer with the processor to function as a feedback device in claim 13
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and as an ordered combination, they do not add significantly more (also known as an "inventive concept") to the exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a feedback device, a computer processor, a region of a storage unit, a non-transitory computer readable medium, and a computer to perform "acquiring interaction data in the form of meeting data, extracting feature points and judgment criterion, comparing the feature points with the judgment criterion, and creating and outputting a notification for the relevant party" amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Accordingly, even when viewed as a whole, nothing in the claim adds significantly more (i.e., an inventive concept) to the abstract idea. Thus, claims 1 and 13 are not patent eligible because the claims are directed to an abstract idea without significantly more.
Dependent claims 2-12 and 14 are also given the full two-part analysis herein, both individually and in combination with the claims from which they depend:
Claim 2 merely further limits the notification to be created in real time during a meeting with participants. This is more of the same abstract idea because the "real-time" limitation does not meaningfully limit the claims; it broadly limits the function to be performed during a meeting. Performing the abstract idea during an interaction between people is still an example of "managing personal behavior, relationships or interactions." Furthermore, other than repeating the "notification creation unit," which is merely an "apply it" level additional element, no further additional elements have been recited in the claims. Therefore, whether analyzed individually, as an ordered combination, or even when considering the claim as a whole including the claims depended upon, the claims are not meaningfully limited in a way that integrates the abstract idea into a practical application or provides significantly more. Therefore, claim 2 is still patent-ineligible under 35 U.S.C. 101 for being directed to an abstract idea without significantly more.
Claims 3 and 4 further limit the abstract idea by indicating information about the attributes of the participants (claim 3) and who the intended recipient of the outputs should be (claim 4). This is more of the same abstract idea because the claims are still managing the personal behavior or interactions between people, especially now that participant background information is used and the notifications are specifically for a particular participant. There are no further additional elements to consider, and even when considering the additional elements along with the repeated additional elements, they are still examples of "apply it" level elements because they are mere instructions to apply the exception on a general purpose computer. Therefore, whether analyzed individually, as an ordered combination, or even when considering the claim as a whole including the claims depended upon, the claims are not meaningfully limited in a way that integrates the abstract idea into a practical application or provides significantly more. Therefore, claims 3-4 are still patent-ineligible under 35 U.S.C. 101 for being directed to an abstract idea without significantly more.
Claims 5-9 merely further limit the abstract idea because they merely indicate what kind of criterion information is being used to perform the comparison and notification creation. For example, claim 5 requires "a predetermined expression" (facial expression), claim 6 requires a "speech ratio," claim 7 requires "a state of a person" (mental state), claim 8 requires "specific content of remarks" (the words being used), and claim 9 requires "characteristics of communication of the participants." All of these are still more of the same abstract idea because they are all elements of managing personal behavior. There are no further additional elements to consider, and even when considering the additional elements along with the repeated additional elements, they are still examples of "apply it" level elements because they are mere instructions to apply the exception on a general purpose computer. Therefore, whether analyzed individually, as an ordered combination, or even when considering the claim as a whole including the claims depended upon, the claims are not meaningfully limited in a way that integrates the abstract idea into a practical application or provides significantly more. Therefore, claims 5-9 are still patent-ineligible under 35 U.S.C. 101 for being directed to an abstract idea without significantly more.
Claim 10 merely further limits the abstract idea by adding the step of modifying the criteria based on the number and attributes of the participants. Basing the criteria on the volume of participants is still more of the same abstract idea because it merely indicates the amount or context of data being analyzed when "managing personal behavior." Furthermore, the additional element of a "modification unit" is merely an "apply it" level element because it encapsulates any generic computing device capable of performing the abstract idea. Therefore, whether analyzed individually, as an ordered combination, or even when considering the claim as a whole including the claims depended upon, the claims are not meaningfully limited in a way that integrates the abstract idea into a practical application or provides significantly more. Therefore, claim 10 is still patent-ineligible under 35 U.S.C. 101 for being directed to an abstract idea without significantly more.
Claim 11 further limits the abstract idea by limiting the roles of the individuals in the abstract idea to job applicants or recruiters, and limiting the criteria to particular skills such as "an ability to elicit information," an "ability to judge appropriately," and an "ability to attract applicants." This is more of the same abstract idea of "certain methods of organizing human activity" because limiting the interactions to a particular audience or the criteria to specific skills is still an example of "managing personal behavior." There are no further additional elements to consider, and even when considering the additional elements along with the repeated additional elements, they are still examples of "apply it" level elements because they are mere instructions to apply the exception on a general purpose computer. Therefore, whether analyzed individually, as an ordered combination, or even when considering the claim as a whole including the claims depended upon, the claims are not meaningfully limited in a way that integrates the abstract idea into a practical application or provides significantly more. Therefore, claim 11 is still patent-ineligible under 35 U.S.C. 101 for being directed to an abstract idea without significantly more.
Claim 12 further limits the abstract idea by adding the steps of "each of the plurality of groups has at least one interviewer who has conducted one or more interviews, sets either the predetermined organization, the group, the interviewer or the meeting as a basis to extract the feature points" to the extraction, comparison, and notification creation steps. This is more of the same abstract idea because it merely bases the analysis on a specific context such as the interviewer, organization, or group, which means that it is still an example of "managing personal behavior." The additional element of a "basis setting unit" is still an "apply it" level element because it encapsulates any computer capable of performing the steps above. Therefore, whether analyzed individually, as an ordered combination, or even when considering the claim as a whole including the claims depended upon, the claims are not meaningfully limited in a way that integrates the abstract idea into a practical application or provides significantly more. Therefore, claim 12 is still patent-ineligible under 35 U.S.C. 101 for being directed to an abstract idea without significantly more.
Claim 14 further limits the abstract idea by adding the steps of "setting one or more types selected by the user's operation further includes setting the judgment criterion selected by the user's operation for each of one or more types of the feature points that have been set, creating the notification includes creating the notification based on the result of the comparing using one or more types of the feature points and the judgment criterion," which still falls within "certain methods of organizing human activity" because it merely recites interactions between an individual and the computer to set the judgment criteria. Furthermore, the analysis steps are recited at such a high level of generality that they encapsulate mere instructions to an individual. There are no further additional elements to consider, and even when considering the additional elements along with the repeated additional elements, they are still examples of "apply it" level elements because they are mere instructions to apply the exception on a general purpose computer. Therefore, whether analyzed individually, as an ordered combination, or even when considering the claim as a whole including the claims depended upon, the claims are not meaningfully limited in a way that integrates the abstract idea into a practical application or provides significantly more. Therefore, claim 14 is still patent-ineligible under 35 U.S.C. 101 for being directed to an abstract idea without significantly more.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-14 are rejected under 35 U.S.C. 103 as being unpatentable over Peters et al. (US 20210076002 A1), hereinafter Peters, further in view of Workable (Nikoletta Bika, "How to be a good interviewer," Workable, Dec. 22, 2020, https://resources.workable.com/stories-and-insights/how-to-be-good-interviewer).
Regarding Claims 1, 13:
Peters discloses an enhanced video conference management system that obtains participant data and provides various outputs to accomplish a particular target emotional or cognitive state. Peters teaches:
Claim 1 Preamble: A feedback device comprising a computer processor that provides feedback in an online meeting with participants, the computer processor coupled with a region of a storage unit and configured to control information used for feedback, wherein the computer processor is configured to perform steps of: (Peters [0064] The present disclosure focuses on a video conference management system, including a moderator system indicating in real-time the level and quality of participation of one or more participants within a multi-party video conference session by monitoring one or more characteristics observable through a media stream in order to stimulate collaboration and active engagement during the video conference. The moderator emphasizes mitigating and overcoming barriers created by providing feedback and/or interjecting actions which facilitate group collaboration.)
Claim 13 Preamble: A non-transitory computer readable medium storing a program that when executed by a processor causes a computer with the processor to function as a feedback device that provides feedback based on an online meeting with participants, the program causing the processor to perform steps of: (Peters [0006] In some implementations, a system can manage and enhance multi-party video conferences to improve performance of the conference and increase collaboration. The techniques can be implemented using one or more computers, e.g., server systems, and/or application(s) operating on various devices in a conference. In general, the system can monitor media streams from different endpoint devices connected to the conference, and enhance the video conference in various ways.)
Claim 1 Body (also representative of claim 13 body):
- acquiring meeting data related to the meeting; (Peters [0082] The input interface 22 configured to receive one or more media stream content comprised of audio and/or visual characteristics from one or more conference participant endpoints 12a-f. The one or more processors 16 are generally configured to calculate at least one measurement value indicative of a participation level based on one or more characteristics from the media stream at any given moment or over a period of time. The output interface 24 transmits at least one integrated representation of the measurement value to one or more conference participant endpoints 12a-f, which will be described in more detail below.)
- acquiring a type of feature points to be extracted from the meeting data, (Peters [0072] The moderator module 20 can use a number of analysis modules 110a-g to determine characteristics of the media stream. For example, these modules 110a-g can each determine feature scores 120 that reflect different attributes describing the media stream. For example, module 110a can determine a frequency and duration that the participant is speaking. Similarly, the module 110a can determine a frequency and duration that the participant is listening. The module 110b determines eye gaze direction of the participant and head position of the participant, allowing the module to determine a level of engagement of the participant at different times during the video conference. This information, with the information about when the user is speaking, can be used by the modules 110a, 110b to determine periods when the participant is actively listening (e.g., while looking toward the display showing the conference) and periods when the user is distracted and looking elsewhere. The module 110c performs pattern analysis to compare patterns of user speech and movement with prior patterns. The patterns used for comparison can be those of other participants in the current conference, patterns of the same participant in the same conference (e.g., to show whether and to what extent a user's attention and mood are changing), or general reference patterns known to represent certain attributes. The module 110d assesses intonation of speech of the participant, which can be indicative of different emotional states. The module 110a recognizes gestures and indicates when certain predetermined gestures are detected. The module 110f performs facial image or expression recognition, for example, indicating when a certain expression (such as a smile, frown, eyebrow raise, etc.) is detected. The module 110g performs speech recognition to determine words spoken by the participant. Optionally, the module 110g can determine whether any of a predetermined set of keywords have been spoken, and indicate the occurrence of those words as feature scores.)
- and a judgment criterion for the type of feature points, as criterion information; (Peters [0216] The system can show collaboration score, or some indicator of the collaboration score, for each participant being analyzed. The collaboration score can be a statistical function of emotion data and speaking time over a rolling time interval. Emotion data can be retrieved from an emotion recognition SDK. Happy and engaged emotions can contribute to a positive collaboration score, while angry or bored emotions can contribute to a low collaboration score. A speaking-to-listening-time ratio that is too high or too low relative to a predetermined threshold or range can detract from the collaboration score, but a ratio inside the predetermined range can contribute to a favorable score.) The scores, thresholds, and ranges that indicate specific measures are mapped to "judgment criterion" because "judgment criterion" is given the broadest reasonable interpretation (BRI) of any measure that the extracted features are being compared to (see present disclosure [0057]).
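As a hypothetical illustration of this mapping, a "judgment criterion" under the BRI can be as simple as a predetermined range against which an extracted measure is compared, as with the speaking-to-listening ratio discussed in Peters [0216]. The function name, bounds, and ratios below are assumptions for illustration only, not values from Peters.

```python
# Hypothetical sketch: a "judgment criterion" as a predetermined range.
# The bounds 0.5 and 2.0 are assumed values, not taken from Peters.

def within_criterion(speak_listen_ratio, low=0.5, high=2.0):
    """Return True when the extracted speaking-to-listening ratio falls
    inside the predetermined range (a favorable comparison result)."""
    return low <= speak_listen_ratio <= high

print(within_criterion(1.2))  # inside the range -> True
print(within_criterion(3.5))  # too high relative to the range -> False
```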
- extracting feature points included in the acquired meeting data in relation to the type included in the acquired criterion information; (Peters [0162] In any of the different arrangements discussed, the system can be used for live analysis during a communication session and post-processing analysis (e.g., based on recorded data after the communication session has ended)... In some cases, rather than recording video, data extracted from the video is recorded instead. For example, the system can calculate during the communication session and store, for each participant, data such as: a time series of vectors having scores for emotional or cognitive attributes for the participant over the course of the communication session (e.g., a vector of scores determined at an interval, such as each second, every 5 seconds, every 30 seconds, each minute, etc.); time-stamped data indicating the detected occurrence of gestures, specific facial expressions, micro-expressions, vocal properties, speech recognition results, etc.; extracted features from images or video, such as scores for the facial action coding system; and so on.)
- comparing the extracted feature points with the judgment criterion included in the acquired criterion information; (Peters [0077] The collaboration factor scores 140 output by the scoring module 130, optionally expressed as a vector, can be compared with reference data (e.g., reference vectors) representing combinations of collaboration factor scores (or combinations of ranges of collaboration factor scores) that are associated with different classifications. [0192] Various different techniques can be used to detect emotional or cognitive attributes of an individual from image or video information... Then, as image or video data comes in for a participant during a communication session, facial images can be compared with the reference data to determine how well the facial expression matches the various reference patterns. In some cases, feature values or characteristics of a facial expression are derived first (such as using scores for the facial action coding system or another framework), and the set of scores determined for a given face image or video snippet is compared with reference score sets for different emotions, engagement levels, attention levels, and so on. The scores for an attribute can be based at least in part on how well the scores for a participant's face image match the reference scores for different characteristics. [0255] The server system 1510 has access to a data repository 1512 which can store thresholds, patterns for comparison, models, historical data, and other data that can be used to assess the incoming video data. For example, the server system 1510 may compare characteristics identified in the video to thresholds that represent whether certain emotions or cognitive attributes are present, and to what degree they are present.)
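Peters [0077] describes comparing a vector of collaboration factor scores with reference vectors associated with different classifications. A hypothetical minimal sketch of such a nearest-reference comparison follows; the labels, vectors, and choice of Euclidean distance are assumptions for illustration, not Peters' disclosed implementation.

```python
# Hypothetical sketch of comparing an extracted score vector against
# reference vectors for different classifications (cf. Peters [0077]).
# Labels, vectors, and the Euclidean metric are assumed for illustration.
import math

def closest_classification(scores, references):
    """Return the label of the reference vector nearest to the scores."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(references, key=lambda label: dist(scores, references[label]))

references = {"engaged": [0.9, 0.8], "bored": [0.1, 0.2]}
print(closest_classification([0.85, 0.75], references))  # -> engaged
```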
- creating a notification for a relevant party involved in the meeting, based on a result of the comparing; (Peters [0078] The moderator module 20 can also store and access mapping data 160 that indicates video conference management actions to be performed, either directly by the moderator module 20 or suggested for a user (e.g., a meeting organizer) to perform. [0091] The moderator logic 32 may also include the function of determining and providing instructions regarding what action or course of action should take place in order to improve the conference participant composite scores, with emphasis on balancing the needs between the different participants in order to facilitate the most collaborative experience. )
- outputting the created notification to the relevant parties. (Peters [0233] FIG. 12A shows recommendations for conversation management with icons in the upper left corner. The different shapes and/or colors can signal different needs. This view shows icons associated with actions that should be taken to address the needs of team members or to facilitate overall collaboration. For example, the square may indicate that the person needs to talk less (e.g., they are dominating the conversation or having a negative effect on others), a triangle may indicate that the person needs to be drawn into the conversation, etc. [0292] The process 1700 includes providing, during the communication session, output data for display that includes the aggregate representation of the emotional or cognitive states of the set of multiple participants (1706).)
- wherein the participants include an applicant who wishes to belong to a predetermined organization, and an interviewer who belongs to the organization and conducts an interview with the applicant, (Peters [0205] In some implementations, the system can be used to monitoring interview to detect lying and gauge sincerity. For example, in a job interview, the system can evaluate a job candidate and score whether are the candidate is telling the truth.)
-wherein the meeting is an interview with the applicant, conducted by the interviewer, and (Peters [0205] The system can give feedback in real time or near real time. In some cases, the system can assess overall demeanor and cultural fit. Typically, this process will use micro-expression detection data. Certain micro expressions, alone or in combination can signal deception, and this can be signaled to the interviewer's device when detected.)
- the criterion information includes, as a type of the feature points, one or more types for assessing an ability to elicit information, (Peters [0372] Table 2055 include scores that indicate the different effects of different content items. The type of analysis represented here can be used to determine the effect of specific content items, such as a specific presentation slide, document, topic, keyword, video clip, image, etc. This can be used to show which portions of a lesson or presentation are most impactful, which ones elicit positive responses or negative responses, and so on. As noted above, the time of presentation of the different content items can be tracked and recorded during the communication session, and both participant reactions in the short term (e.g., within 30 seconds, 1 minute, 5 minute) and overall results (e.g., engagement, emotion levels, outcomes, etc. for the entire communication session) can be used in the analysis.)
- one or more types for assessing an ability to judge appropriately, and (Peters [0205] For example, in a job interview, the system can evaluate a job candidate and score whether are the candidate is telling the truth.) Assessing whether a candidate is telling the truth is an example of “assessing an ability to judge appropriately.”
- one or more types for assessing an ability to attract applicants. (Peters [0310] This information can show what communication styles or techniques most lead to the interest of the user or maintain the engagement of the user, and which styles or actions negatively affect the user and should be avoided. As a result, by observing a person's interactions over time, the system can automatically build a profile of the user's communication preferences, based on the outcomes the system observed as measures of the user's emotional and cognitive state. [0331] The system can be used to promote any of various different outcomes. Examples include, but are not limited to, participants completing a task, participants completing a communication session ... high scores for participant satisfaction for a communication session (e.g., in a post-meeting survey), acquisition of a skill by participants, retention of information from the communication session by participants, high scores for participants on an assessment (e.g., a test or quiz for material taught or discussed in a communication session, such as a class or training meeting), participants returning to a subsequent communication session,) Providing styles or techniques that lead to the interest of the user falls within the scope of “assessing an ability to attract applicants.” Providing high scores for participant satisfaction and retention of information, and participants returning to a subsequent communication session are all types of measures for assessing an ability to attract participants.
However, Peters fails to teach or suggest:
wherein:
the ability to elicit information includes the interviewer's ability to elicit necessary information from the applicants,
the ability to judge appropriately includes the interviewer's ability to select appropriate applicants for the next stage of selection or job offers, and
the ability to attract applicants includes the interviewer's ability to increase the applicant's favorable impression of the predetermined organization.
However, Workable discloses an article titled “How to be a good interviewer,” which suggests certain metrics for assessing interviewers. Workable suggests:
the ability to elicit information includes the interviewer's ability to elicit necessary information from the applicants,(Workable [Page 4] Be methodical
Unstructured interviews (that feel like free-flowing conversations that lack an agenda) can easily become subjective and non-job-related. Unstructured interviews help candidates feel more comfortable, but they don’t result in the best hiring decisions.
Adding some structure to your interviews will make them more effective. Even if you don’t have time to structure your interviews completely, try to simulate a structured interview as much as possible:
Choose questions carefully. Generic interview questions (like “what’s your greatest weakness?”) are overused and brain teasers are ineffective. Prepare a short list of questions tailored to the role you’re hiring for. Behavioral and situational questions help you judge a candidate’s soft skills (like problem-solving and critical thinking.) Aim to ask the same questions to all candidates and be aware of illegal questions to avoid.) The descriptions in Workable fall within the scope of the limitation.
the ability to judge appropriately includes the interviewer's ability to select appropriate applicants for the next stage of selection or job offers, and(Workable [Page 4] Rate candidates’ answers with a consistent scale. A ‘poor’ to ‘excellent’ or ‘low’ to ‘high’ scale can work well. To reduce the halo effect, use your notes to rate all candidates’ answers at the same time, after conducting all of your interviews, instead of rating candidates individually right after each interview. Rate every candidate on one question, before moving to the next question. [Page 5] Improve your judgement
Unconscious biases can cloud our judgement and lead us to wrong decisions. Combating those biases is key for good interviewers. Here are some ideas to achieve this:
Take an Implicit Association Test (IAT.) The first step in fighting biases is becoming aware of them. Harvard’s IAT can help you become more aware of your biases.
Learn how cognitive biases work. Understanding different kinds of bias can help you recognize them when they’re at work.
Think about your unique prejudices. Personal concerns, preferences and experience may interfere with our judgement. For example, if an interviewer believes that overqualified employees will eventually get bored with their job, they may refuse to hire them. That way, they may miss out on talented people who might still have been valuable team members.
Slow down. Resist the urge to made a decision about a candidate before their interview ends. It’s best to make your decisions after you’ve met all candidates and have consulted your notes.
Distrust body language cues. Body language isn’t an exact science; some non verbal cues may indicate many different things and vary across cultures.
Team up with someone. If possible, ask one of your team members to join you when interviewing candidates. Your team member’s unique perspective paired with your own can help you make more informed and objective hiring decisions.
Ask your teammates who are responsible for tracking recruiting metrics for information about candidate experience and quality of hire metrics.)
the ability to attract applicants includes the interviewer's ability to increase the applicant's favorable impression of the predetermined organization.(Workable [Page 3] Interviewing is hard work, but getting to hire great people and strengthening your employer’s brand is worthwhile [Page 5] Keep records. Recording and filing your notes helps you as an interviewer since you can refer back to them any time. And your company can also use them in court, in the unlikely event that they face a lawsuit.
Monitor results. Ask your teammates who are responsible for tracking recruiting metrics for information about candidate experience and quality of hire metrics.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Peters with the teachings of Workable, specifically the metrics that indicate a good interviewer. By simply modifying Peters' criterion information to align with the teachings of Workable, one would predictably arrive at performing the claimed limitations. One of ordinary skill in the art would have been motivated by the benefit of accurately assessing an interviewer’s performance, as taught by Workable, in order to help improve an interviewer. (Workable [Page 5] A good interviewer views mistakes and failures as opportunities to improve. Here are a few things you can do to learn from your interviewing experience more deliberately
Keep records. Recording and filing your notes helps you as an interviewer since you can refer back to them any time. And your company can also use them in court, in the unlikely event that they face a lawsuit.)
Regarding Claim 2:
Peters in view of Workable teaches The feedback device according to claim 1,
- wherein the creating the notification includes creating a notification in real-time during an online meeting with participants. (Peters [0116] Once the system determines the emotional states and emotional reactions of participants in a communication session, the system can provide feedback during the communication session or later. For example, the system can be used in videoconferencing to provide real-time indicators of the current emotional states, reactions, and other characteristics of participants in a video conference.)
Regarding Claim 3:
Peters in view of Workable teaches The feedback device according to claim 1, wherein the computer processor is configured to perform further steps of:
-acquiring participant information indicating attributes of the participants; and (Peters [0340] The system 1510 can obtain other information related to the communication sessions, such as context data 2002 that indicates contextual factors for the communication session as a whole or for individual participants. For example, the context data 2002 can indicate companies or organizations involved, a purpose or topic of a meeting, the time that the meeting occurred, total number of participants in the meeting, and so on. For individual participants, the context data 2002 may indicate factors such as background noise present, type of device used to participate in the communication session, and so on... [0345] The system 1510 can also analyze the context 2014 for individual participants or for a communication session generally to identify how contextual factors (e.g., time, location, devices used, noise levels, etc.) correlate with other aspects of the communications sessions that are observed. The system 1510 can also analyze the attributes of participants 2015 to determine how various participant attributes (e.g., age, sex, education level, location, etc.) vary their development of emotional and cognitive states and achievement of different outcomes. )
-determining a judgment criterion to be applied to the participants, based on the acquired participant information, (Peters [0345] In stage 2010, the system 1510 performs analysis to identify elements present in different communication sessions and the timing that the elements occur. For example, the system 1510 can analyze participation and collaboration 2011, to determine which participants were speaking at different times, the total duration of speech of different participants, the distribution of speaking times, the scores for participation and collaboration for different participants at different portions of the communication sessions, and so on. The system 1510 can analyze records of participant actions 2012, and correlate instances of different actions with corresponding communication sessions and participants...The system 1510 can also analyze the attributes of participants 2015 to determine how various participant attributes (e.g., age, sex, education level, location, etc.) vary their development of emotional and cognitive states and achievement of different outcomes. [0353] For example, based on the various example communication sessions, the system can train a neural network or classifier to receive input indicating one or more target outcomes that are desired for a communication session, and to then provide output indicating the emotional or cognitive states (e.g., attributes or combinations of attributes) that are most likely to promote the target outcomes. ) In this excerpt, Peters performs analysis on participation and collaboration based on when participants were speaking, total duration of speech, etc. Since the judgment criterion varies based on contextual factors (time, location, devices used, noise levels, etc.), Peters teaches determining a judgment criterion to be applied to the participants based on the acquired participant information.
-wherein the comparing includes comparing extracted feature points with the determined judgment criterion. (Peters [0355] In stage 2030, the system 1510 uses the results of the analysis to provide feedback about communications sessions and to provide recommendations to improve communication sessions. One type of output is real-time feedback 2031 and recommendations during a communication session. For example, from the analysis, the system 1510 can determine the emotional and cognitive states that have led to the most effective learning for students. During an instructional session, the system 1510 can compare the real-time monitored emotional and cognitive states of students in the class with the profile or range of emotional and cognitive states predicted to result in good learning outcomes. When the system determines that the students' emotional and cognitive states are outside a desired range for good results, the system 1510 can generate a recommendation for an action. [0354] Some scores, such as for content items, presentation techniques, speaking styles, and so on can be scored to indicate their effectiveness at leading to particular emotional or cognitive states or to particular outcomes. Other scores can be assigned to individual presenters and participants to indicate how well the individuals are achieving desired outcomes, whether those outcomes are within the communication session (e.g., maintaining a desired level of engagement, attention, or participation among a class) ) In this excerpt, Peters compares the real-time monitored emotional and cognitive states (mapped to the extracted feature points) with the profile or range of emotional and cognitive states (the judgment criterion). Peters' range of emotional and cognitive states also satisfies the “determined judgment criterion” because it is based on indicators and scores that indicate how well the desired outcomes are being achieved.
Regarding Claim 4:
Peters in view of Workable teaches The feedback device according to claim 3,
-wherein the criterion information further includes, specific information for specifying a relevant party as an output destination to whom the notification is to be outputted, and (Peters [0377] Many examples herein emphasize the impact of inducing emotional or cognitive states in general participants, such as audience members, students in a class, potential clients, etc. It can also be helpful or important to assess which cognitive or emotional states in presenters or other participants with special roles (e.g., teachers, salespeople, moderators, etc.) promote or discourage desired outcomes. For example, for a salesperson at a certain company, the system may determine that a particular range of enthusiasm, happiness, sadness, or other attribute leads to improved outcomes, while high scores for another attribute may lead to lower outcomes. The system can thus recommend emotional and cognitive states to be targeted for presenters or other roles, which may be the same as or different from those desired for other participants, as well as communication session elements or context elements that are predicted to promote the desired emotional or cognitive states of the presenters. [0357] This type of report may be generated based on the analysis of recorded communication sessions, or based on groups of communication sessions. For example, the report may aggregate information about multiple class sessions for a class or teacher, and provide a recommendations for that class or teacher.) Indicating the user’s roles is part of “criterion information,” and since the recommendations can be targeted for “presenters or other roles,” this is an example of specifying a relevant party as an output destination to whom the notification (recommendation) is to be outputted. Another example, in [0357], determines a particular class or teacher to send the recommendation to.
-the outputting further includes determining a participant as the output destination, based on the acquired participant information and the acquired specific information. (Peters [0320] In step 1814, the system provides emotional intelligence feedback and cues to each participant, optimized based on all the emotional maps of the participants. This can involve notifying users of, or recommending actions based on, hot-button topics, interpersonal dynamics, mood tendencies, recent interactions, and so on. Without revealing the personal information of the meeting participants, the video-conferencing system can give prompts, cues, and metrics that enable meeting participants to behave in ways that will be most compatible with their co-attendees emotional states and preferences. For example, the facilitator may be encouraged to call on more reserved members to speak early in the meeting. Speakers may be encouraged to use less aggressive language if interacting with a participant whose emotional map shows they experienced significant stress in their last meeting. Participants who tend to dominate may be prompted to hold their comments until half of the scheduled meeting time has transpired. ) This excerpt in Peters is an example of determining a participant as an output destination; for example, a participant dominating the speaking time will be prompted (notified) to hold off on their comments. The fact that these notifications are based on “interpersonal dynamics, mood tendencies, recent interactions” satisfies the limitation “based on the acquired participant information and the acquired specific information.”
Regarding Claim 5:
Peters in view of Workable teaches The feedback device according to claim 1,
- wherein the criterion information includes a predetermined expression as a type of the feature points, (Peters [0192] In some cases, reference data indicating facial features or characteristics that are indicative of or representative of certain emotions or other attributes are determined and stored for later use.)
-the extracting feature points includes extracting the predetermined expression included in the meeting data, as a feature point, (Peters [0192] In some cases, feature values or characteristics of a facial expression are derived first (such as using scores for the facial action coding system or another framework), and the set of scores determined for a given face image or video snippet is compared with reference score sets for different emotions, engagement levels, attention levels, and so on. The scores for an attribute can be based at least in part on how well the scores for a participant's face image match the reference scores for different characteristics. [0193] As another example, machine learning models can be trained to process feature values for facial characteristics or even raw image data for a face image. To train a machine learning model, the system may acquire various different example images showing different individuals and different emotional or cognitive states.)
- the comparing includes comparing the extracted expression with the judgment criterion, and (Peters [0192] Then, as image or video data comes in for a participant during a communication session, facial images can be compared with the reference data to determine how well the facial expression matches the various reference patterns.)
- the creating the notification includes creating a notification, in which the predetermined expression has been evaluated, based on a result of the comparing. (Peters [0286] The data may be provided in any appropriate form, such as numerical values to adjust a user interface element (e.g., such as a slider, dial, chart, etc.), markup data specifying visual elements to show the aggregate representation, image data for an image showing an aggregate representation, and so on. In some cases, the system can cause the presenter to be notified of the aggregate representation (e.g., when it reaches a predetermined threshold or condition) using an audio notification, a haptic notification, or other output.)
Regarding Claim 6:
Peters in view of Workable teaches The feedback device according to claim 1,
- wherein the criterion information includes a speech ratio as a type of the feature points, (Peters [0226] FIG. 9C shows a timeline theme view that arranges indicators of different participants (in this case face images or icons) according to their speaking time. This view, focused on speaking time, shows the relative amounts of time that each participant has used. The vide shows faces or icons ordered along a scale from low speaking time to high speaking time, from right to left. [0185] The system can also provide information about conditions detected over the course of the recorded interaction, such as participant Dave being confused at position 23:12 (e.g., 23 minutes, 12 seconds) into the interaction, and participant Sue appearing to be bored from 32:22 to 35:54. Many other statistics and charts can be provided, such as a speaking time metrics for individuals or groups, a histogram of speaking time, a chart or graph of speaking time among different participants over time,)
- the extracting feature points includes extracting the speech ratio of the participants, as a feature point, the comparing includes comparing the extracted speech ratio with the judgment criterion, and (Peters [0139] The speaking time data can be provided to individuals during a communication session to facilitate collaboration in real time during the session. For example, individual participants may be shown their own duration of speaking time in the session or an indication of how much of the session they have been the speaker. Participants may be shown the distribution of speaking times or an indication of relative speaking times of the participants. As another example, participants can be shown a classification for the speaking times in the session, e.g., balanced, unbalanced, etc.)
- the creating the notification includes creating a notification encouraging a balance, based on a result of the comparing. (Peters [0141] Notification to the group leader or meeting host is also an important use. The leader or moderator is notified in many implementations when individuals or sub-groups are detected to be falling behind in the conversation.)
Regarding Claim 7:
Peters in view of Workable teaches The feedback device according to claim 1,
- wherein the criterion information includes a state of person, as a type of the feature points, (Peters [0243] The system 1500 provides a presenter 1501 information about the emotional and cognitive state of the audience, aggregated from information about individual participants. During the communication session, a device 1502 of the presenter 1501 provides a user interface 1550 describing the state of the audience (e.g., emotion, engagement, reactions, sentiment, etc.). )
- the extracting feature points includes extracting a state of the participants, as a feature point, (Peters [0310] As a result, by observing a person's interactions over time, the system can automatically build a profile of the user's communication preferences, based on the outcomes the system observed as measures of the user's emotional and cognitive state. [0313] In step 1804, as users participate in the virtual communication session, emotional intelligence and context data is compiled, filtered, and summarized for the user. Various types of data can be collected for a communication session, such as (1) a transcript of the conversation (entire or key-word summary), (2) facial expression data, emotional responses, cognitive attributes, etc., (3) voice stress analysis, and (4) speaking times for participants, as well as potentially biometric or physiological data (e.g., heart rate and blood pressure) gathered from Internet-of-Things (IOT) devices such as wearable devices. Data can be gathered for all participants in the communication session, not only to be able to determine an emotional map cookie for each participant but also to show how each individual reacts to the emotions and actions of the other participants. The processing of this data extracts key responses and events, filters out conditions that are not important, and summarizes the user's emotional and cognitive attributes and actions in the communication session. )
- the comparing includes comparing the extracted state with the judgment criterion, and (Peters [0355] One type of output is real-time feedback 2031 and recommendations during a communication session. For example, from the analysis, the system 1510 can determine the emotional and cognitive states that have led to the most effective learning for students. During an instructional session, the system 1510 can compare the real-time monitored emotional and cognitive states of students in the class with the profile or range of emotional and cognitive states predicted to result in good learning outcomes. )
- the creating the notification includes creating a notification providing advice, based on a result of the comparing. (Peters [0243] This provides the presenter 1501 real-time feedback during the communication session to help the presenter 1501 determine the needs of the audience and adjust the presentation accordingly. The information can be provided in a manner that shows indications of key elements such as engagement and sentiment among the audience, so the presenter 1501 can assess these at a glance. The information can also show how the audience is responding to different portions of the presentation. In an educational use, the information can show which topics or portions of a lesson are received. For example, low engagement or high stress may indicate that the material being taught is not being effectively received. [0355] When the system determines that the students' emotional and cognitive states are outside a desired range for good results, the system 1510 can generate a recommendation for an action to improve the emotional and cognitive states of the students, and thus better facilitate the desired educational outcomes. The action can be selected by the system 1510 based on scores for outcomes, based on output of a machine learning model, or other technique. The system 1510 then sends the recommendation for presentation on the teacher's client device.)
Regarding Claim 8:
Peters in view of Workable teaches The feedback device according to claim 1,
- wherein the criterion information includes specific content of remarks as a type of the feature points, (Peters [0389] FIG. 21B is a flow diagram showing an example of a process 2150 for providing recommendations for improving a communication session and promoting a target outcome. The process 2150 includes identifying a target outcome (2152), which can be an action of a participant in a communication session or result that is separate from the communication session. The system determines one or more communication session factors that are predicted to promote the target outcome (2154). For example, the system can identify emotional and cognitive states of participants that are predicted to promote the target outcome (2154A). The system can identify communication session factors that are predicted to promote the identified emotional and cognitive states among participants in a communication session (2154B). The factors that are determined can include actions of a participant (e.g., a teacher, presenter, moderator, etc.), characteristics of a communication session (e.g., time of day, duration, number of people, etc.), content (e.g., types of media, topics, keywords, specific content items, etc.), and others.) Since Peters teaches the criterion including “content” such as “keywords” and “specific content items,” the limitation has been satisfied.
-the extracting feature points includes extracting the specific content of remarks, as a feature point, (Peters [0300] The appropriate recommendation(s) for a given pattern or distribution of participant scores and/or aggregate representation may be determined through analysis of various different communication sessions. For different communication sessions, the scores at different points in time can be determined and stored, along with time-stamped information about the content of the communication session, e.g., presentation style (e.g., fast, slow, loud, soft, whether slides are shown or not, etc.), topics presented (e.g., from keywords from presented slides, speech recognition results for speech in the session, etc.), media (e.g., video, images, text, etc.), and so on. Audience characteristics (e.g., demographic characteristics, local vs. remote participation, number of participants, etc.) can also be captured and stored.)
- the comparing includes comparing the extracted content with the judgment criterion, and (Peters [0345] The system 1510 can analyze records of participant actions 2012, and correlate instances of different actions with corresponding communication sessions and participants. The system 1510 can analyze records of content 2013 of communication sessions, such as content presented, words or phrases spoken, topics discussed, media types used, and so on, to determine when different content occurred and how content items relate to other events and conditions in the communication sessions. The system 1510 can also analyze the context 2014 for individual participants or for a communication session generally to identify how contextual factors (e.g., time, location, devices used, noise levels, etc.) correlate with other aspects of the communications sessions that are observed. The system 1510 can also analyze the attributes of participants 2015 to determine how various participant attributes (e.g., age, sex, education level, location, etc.) vary their development of emotional and cognitive states and achievement of different outcomes.)
-the creating the notification includes creating a notification regarding the extracted content, based on a result of the comparing. (Peters [0147] The analysis of the system can help teachers and others identify elements that are effective and those that are not. This can be used to provide feedback about which teachers are most effective, which content and teaching styles are most effective, and so on. The analysis helps the system identify the combinations of factors that result in effective learning (e.g., according to measures such as knowledge retention, problem solving, building curiosity, or other measures), so the system can profile these and recommend them to others. Similarly, the system can use the responses to identify topics, content, and styles that result in negative outcomes, such as poor learning, and inform teachers and others in order to avoid them. When the system detects that a situation correlated with poor outcomes occurs, the system can provide recommendations in the moment to change the situation (e.g., recommendation to change tone, change topic, use an image rather than text content, etc.) and/or analysis and recommendations after the fact to improve future lessons (e.g., feedback about how to teach the lesson more effectively in the future).) Feedback on how to change the content (recommendation to change tone, topic, or use an image) is an example of creating a notification regarding the extracted content.
Regarding Claim 9:
Peters in view of Workable teaches The feedback device according to claim 1,
- wherein the criterion information includes characteristics of communication of the participants, based on the content of remarks, as a type of the feature points, (Peters [0389] FIG. 21B is a flow diagram showing an example of a process 2150 for providing recommendations for improving a communication session and promoting a target outcome. The process 2150 includes identifying a target outcome (2152), which can be an action of a participant in a communication session or result that is separate from the communication session. The system determines one or more communication session factors that are predicted to promote the target outcome (2154). For example, the system can identify emotional and cognitive states of participants that are predicted to promote the target outcome (2154A). The system can identify communication session factors that are predicted to promote the identified emotional and cognitive states among participants in a communication session (2154B). The factors that are determined can include actions of a participant (e.g., a teacher, presenter, moderator, etc.), characteristics of a communication session (e.g., time of day, duration, number of people, etc.), content (e.g., types of media, topics, keywords, specific content items, etc.), and others.) Peters' characteristics of a communication session and content satisfy the limitation.
- the extracting feature points includes extracting the characteristics of communication of the participants, based on the content of remarks, as a feature point, (Peters [0300] The appropriate recommendation(s) for a given pattern or distribution of participant scores and/or aggregate representation may be determined through analysis of various different communication sessions. For different communication sessions, the scores at different points in time can be determined and stored, along with time-stamped information about the content of the communication session, e.g., presentation style (e.g., fast, slow, loud, soft, whether slides are shown or not, etc.), topics presented (e.g., from keywords from presented slides, speech recognition results for speech in the session, etc.), media (e.g., video, images, text, etc.), and so on. Audience characteristics (e.g., demographic characteristics, local vs. remote participation, number of participants, etc.) can also be captured and stored.)
- the comparing includes comparing the extracted characteristics with the judgment criterion, and (Peters [0345] The system 1510 can analyze records of participant actions 2012, and correlate instances of different actions with corresponding communication sessions and participants. The system 1510 can analyze records of content 2013 of communication sessions, such as content presented, words or phrases spoken, topics discussed, media types used, and so on, to determine when different content occurred and how content items relate to other events and conditions in the communication sessions. The system 1510 can also analyze the context 2014 for individual participants or for a communication session generally to identify how contextual factors (e.g., time, location, devices used, noise levels, etc.) correlate with other aspects of the communications sessions that are observed. The system 1510 can also analyze the attributes of participants 2015 to determine how various participant attributes (e.g., age, sex, education level, location, etc.) vary their development of emotional and cognitive states and achievement of different outcomes.)
-the creating the notification includes creating, as a notification, an approach of responding to the characteristics of communication as extracted based on a result of judgment obtained through the comparing. (Peters [0310] The emotional map cookie can also include data that show emotional habits of an individual, based on data aggregated across multiple communication sessions. The system can then inform others of the best way to interact with a user, e.g., topics or tones to use, and which to avoid, and in effect coach other participants into the proper behavior to have successful communication with the user. From various interactions, the system can determine a map of norms or preferences for each person, to show how others most successfully interact with the person. This information can show what communication styles or techniques most lead to the interest of the user or maintain the engagement of the user, and which styles or actions negatively affect the user and should be avoided. As a result, by observing a person's interactions over time, the system can automatically build a profile of the user's communication preferences, based on the outcomes the system observed as measures of the user's emotional and cognitive state. This allows the emotional map cookie to determine, for example, whether a person responds best to an excited tone or an even, measured tone; whether the person prefers short meetings or long ones; which actions or emotions lead the person to engage or collaborate; whether the user prefers small meetings or larger ones; which actions are most effective at diffusing anger or increasing attention of the user; and so on.) Suggesting various topics or tones to use, or which communication styles or techniques to maintain the engagement of the user is an example of creating a notification indicating an approach to responding to the characteristics of communication.
Regarding Claim 10:
Peters in view of Workable teaches The feedback device according to claim 1,
- wherein the computer processor is configured to perform further steps of modifying content of the criterion information acquired based on the number and attributes of the participants. (Peters [0301] The system can use the current duration of the communication session, along with other factors, to select the recommendation most appropriate for the current situation. Thus, the recommendations provided can help guide the presenter to techniques that are predicted, based on observed prior communication sessions, to improve emotional or cognitive states given context of, e.g., the current emotional or cognitive profile or distribution of the audience, the makeup of the audience (e.g., size, demographics), the type or purpose of the communication session (e.g., online class, lecture, videoconference, etc.), and so on. [0369] The system 1510 can determine which relationships are present for different contexts or situations, allowing the system 1510 tailor the actions recommended for the situation that is present in a communication session, as well as the desired outcomes or desired emotional states to be promoted. Thus the scores for different elements of communication sessions may vary based on factors such as the type of participant, the type of meeting (e.g., a sales pitch, a classroom, a competition, etc.), a size of the meeting, the goal or objective of the meeting, etc. )
Regarding Claim 11:
Peters in view of Workable teaches The feedback device according to claim 1,
- wherein the computer processor is configured to execute setting, via a user’s operation as the type of feature points, (Peters [0303] These changes can be done for individuals, groups of participants, or for all participants, and can help address situations such as low engagement due to technical limitations, such as jerky video, network delays and so on. For example, if the system detects that undesirable emotional or cognitive attributes or patterns coincide with indicators of technical issues (such as delays, high participant device processor usage, etc.), then the system can adjust the configuration settings for the communication session to attempt to improve engagement and emotion among the participants and facilitate more effective communication. [0186] Any and all of the different system architectures discussed herein can include features to enforce privacy and user control of the operation of the system. The end user can be provided an override control or setting to turn emotion analysis off. For privacy and control by the user, there may be a user interface control or setting so the participant can turn off emotion analysis, even if processing is being done at a different device (e.g., a server or a remote recipient device).)
- one or more types for assessing the ability to elicit information, (Peters [0372] Table 2055 include scores that indicate the different effects of different content items. The type of analysis represented here can be used to determine the effect of specific content items, such as a specific presentation slide, document, topic, keyword, video clip, image, etc. This can be used to show which portions of a lesson or presentation are most impactful, which ones elicit positive responses or negative responses, and so on. As noted above, the time of presentation of the different content items can be tracked and recorded during the communication session, and both participant reactions in the short term (e.g., within 30 seconds, 1 minute, 5 minute) and overall results (e.g., engagement, emotion levels, outcomes, etc. for the entire communication session) can be used in the analysis.)
- one or more types for assessing an ability to judge appropriately, and (Peters [0205] For example, in a job interview, the system can evaluate a job candidate and score whether are the candidate is telling the truth.) Assessing whether a candidate is telling the truth is an example of "assessing an ability to judge appropriately."
- one or more types for assessing an ability to attract applicants. (Peters [0310] This information can show what communication styles or techniques most lead to the interest of the user or maintain the engagement of the user, and which styles or actions negatively affect the user and should be avoided. As a result, by observing a person's interactions over time, the system can automatically build a profile of the user's communication preferences, based on the outcomes the system observed as measures of the user's emotional and cognitive state. [0331] The system can be used to promote any of various different outcomes. Examples include, but are not limited to, participants completing a task, participants completing a communication session ... high scores for participant satisfaction for a communication session (e.g., in a post-meeting survey), acquisition of a skill by participants, retention of information from the communication session by participants, high scores for participants on an assessment (e.g., a test or quiz for material taught or discussed in a communication session, such as a class or training meeting), participants returning to a subsequent communication session,) Providing styles or techniques that lead to the interest of the user falls within the scope of "assessing an ability to attract applicants." Providing high scores for participant satisfaction and retention of information, and participants returning to a subsequent communication session, are all types of measures for assessing an ability to attract applicants.
Regarding Claim 12:
Peters in view of Workable teaches The feedback device according to claim 1,
- wherein each of a plurality of groups in the predetermined organization has at least one interviewer who has conducted one or more interviews, (Peters [0386] In general, the results of the analysis may be determined for a single presenter or across multiple presenters; for a single communication session or multiple communication sessions; for effects on a single participant, a group of participants (e.g., a subset of those in the communication sessions), or across all participants; for a single content instance, for multiple content instances, for content instances of a certain category or type, etc. In many cases personalized or customized analysis tailored for a certain company, meeting type, or situation is important. For example, the culture of two different organizations may result in different emotional or cognitive states being needed to achieve good results for the different organizations, and analysis of the communication sessions for the two organizations may reveal that. [0205] In some implementations, the system can be used to monitoring interview to detect lying and gauge sincerity. For example, in a job interview, the system can evaluate a job candidate and score whether are the candidate is telling the truth. The system can give feedback in real time or near real time. In some cases, the system can assess overall demeanor and cultural fit. Typically, this process will use micro-expression detection data. Certain micro expressions, alone or in combination can signal deception, and this can be signaled to the interviewer's device when detected.) Since Peters teaches a plurality of groups in organizations that have particular cultural fits, and also discusses job interviewers assessing cultural fit in [0205], the limitation has been satisfied because the organizations have job interviewers.
- the computer processor is configured to perform further steps of setting either the predetermined organization, the group, the interviewer, or the meeting, as a basis for the extracting feature points included in the meeting data to extract the feature points, (Peters [0190] In the case where the system is integrated with the video conferencing platform, the system can use data acquired from many meetings involving a participant, even meeting involving different individuals or companies. As a result, the system can develop norms/baselines for individuals, to personalize the system's analysis and customize the behavior of the system and improve accuracy. The system can look for and identify details about a person's reactions, behaviors, expressions, and so on and adjust over time. The results can be stored as a personalization profile for each user, to use the history of interactions for a user to do better analysis for that person.) A norm/baseline for individuals, companies, or data acquired from a meeting is an example of a "basis setting unit," which is given the BRI of any benchmark/baseline/standard value as it pertains to a particular user.
- the extracting feature points included in the meeting data includes extracting feature points from one or more of the meeting data included in the set basis, the result of the comparing is based on the set basis, and (Peters [0314] The profile can also record information about past interactions, such as interactions in the recent past that may be affecting current emotions (e.g., emotions from the user's most recent call). Similarly, the profile can indicate the history with a particular group (e.g., the user's mood and reactions when last meeting with a particular person). The system can generate the profile to include emotional norms and preferences of the user, such as: (1) reaction to group size (e.g., differences in behavior or emotion for different numbers of participants); (2) emotion or interactivity cycles based on various factors (e.g., variations due to time-of-day, day-of-week, local weather, etc.); (3) tendencies of the user (e.g., toward domination or reservation); and (4) the user's ability to engage or inspire others.)
- the creating the notification includes creating a notification regarding the set basis. (Peters [0310] The emotional map cookie can also include data that show emotional habits of an individual, based on data aggregated across multiple communication sessions. The system can then inform others of the best way to interact with a user, e.g., topics or tones to use, and which to avoid, and in effect coach other participants into the proper behavior to have successful communication with the user. From various interactions, the system can determine a map of norms or preferences for each person, to show how others most successfully interact with the person. This information can show what communication styles or techniques most lead to the interest of the user or maintain the engagement of the user, and which styles or actions negatively affect the user and should be avoided.)
Regarding Claim 14:
Peters in view of Workable teaches The feedback device according to claim 11, wherein
- setting one or more types selected by the user's operation further includes setting the judgment criterion selected by the user's operation for each of one or more types of the feature points that have been set,(Peters [0190] In the case where the system is integrated with the video conferencing platform, the system can use data acquired from many meetings involving a participant, even meeting involving different individuals or companies. As a result, the system can develop norms/baselines for individuals, to personalize the system's analysis and customize the behavior of the system and improve accuracy. The system can look for and identify details about a person's reactions, behaviors, expressions, and so on and adjust over time. The results can be stored as a personalization profile for each user, to use the history of interactions for a user to do better analysis for that person. [0204] The system can look at changes in a person's voice over time. One of the thing that's powerful about micro expressions is consistency across ages and nationalities and gender. There are some commonalities in voice, but there may also be user-specific or location-specific or context-specific nuances. Many other factors like voice do have personal norms, language, regional and other effects. The system can store profile set or database of participant information, which characterizes the typical aspects of an individual's voice, face, expressions, mannerisms, and so on. The system can then recognize that the same person appears again, using the name, reference face data, or the profile itself, and then use the profile to better assess the person's attributes.)
- creating the notification includes creating the notification based on the result of the comparing using one or more types of the feature points and the judgment criterion. (Peters [0255] The server system 1510 has access to a data repository 1512 which can store thresholds, patterns for comparison, models, historical data, and other data that can be used to assess the incoming video data. For example, the server system 1510 may compare characteristics identified in the video to thresholds that represent whether certain emotions or cognitive attributes are present, and to what degree they are present. As another example, sequences of expressions or patterns of movement can be determined from the video and compared with reference patterns stored in the data storage 1512. [0286] When providing output data that includes or indicates the aggregate representation, this can be done as providing data that, when rendered or displayed, provides a visual output of the chart, graph, table, or other indicator. The data may be provided in any appropriate form, such as numerical values to adjust a user interface element (e.g., such as a slider, dial, chart, etc.), markup data specifying visual elements to show the aggregate representation, image data for an image showing an aggregate representation, and so on. In some cases, the system can cause the presenter to be notified of the aggregate representation (e.g., when it reaches a predetermined threshold or condition) using an audio notification, a haptic notification, or other output. )
Response to Arguments
Applicant's arguments filed 02/26/2026 have been fully considered but they are not persuasive.
The applicant's amended claims are no longer subject to an interpretation under 35 U.S.C. 112(f) because they no longer recite a means-plus-function limitation without corresponding structure. The examiner acknowledges support for all of the amendments in the claims.
Regarding the arguments over 35 U.S.C. 101, the applicant's amendments to claim 13 overcome the statutory 101 rejection because the claim is no longer directed to software per se; however, claim 13 remains rejected under 35 U.S.C. 101. The applicant asserts that the claims transcend the simple human activity of "managing personal behavior..." because the claims require acquiring, extracting, and comparing feature points from data, and relating the feature points to criterion information, in order to improve a computer process of hiring potential employees. This argument is not persuasive because the fact that feature points are acquired, extracted, and compared from data does not go beyond "certain methods of organizing human activity," especially when the analysis steps are recited at a high level of generality such that they encompass mere instructions to an individual. Furthermore, an improvement to "hiring potential employees" still falls within "certain methods of organizing human activity," because hiring activity is no more than "managing personal behavior, interactions or relationships between people." The examiner notes that improvements to the abstract idea itself do not qualify as improvements that integrate the abstract idea into a practical application, because the improvement must be lent by the additional elements. See MPEP 2106.05(a). Therefore, the applicant's argument that these limitations create better computer systems for hiring prospective employees is not persuasive because the improvements are to the abstract idea itself, which is merely "applied" on a general purpose computer. No improvements to computer functionality, a technology, or a technical field would have been apparent to a person of ordinary skill in the art reviewing the original disclosure or the claims.
Furthermore, regarding the additional elements under Steps 2A/2B, the applicant argues that the claims perform a process that is beyond human thought "because the process is far more complicated and quick than what can be performed by a human mind." This is not persuasive because the rejection does not rely on an assertion that the abstract idea is a mental process; even if the claims cannot be performed in the human mind, they still fall under "certain methods of organizing human activity" and are still directed to an abstract idea. Moreover, even though a computer can perform tasks faster than the human mind, merely applying the abstract idea on a computer to speed up an otherwise mental process does not qualify as an improvement to computer functionality. Therefore, the applicant's argument that the "claimed device is doing calculations much more complicated than can be performed by a human mind because the computer calculations are more intricate and complex than could be handled by a human" is not persuasive for the reasons stated above. The examiner also respectfully disagrees with the assertion that a human mind would be unable to produce such "feedback in an online meeting with participants," that is, feedback to the relevant party during a prospective employee's interview, because the task itself can be performed in the human mind; in any event, this point is not dispositive because the relied-upon abstract idea category is not "mental processes." Therefore, none of the applicant's arguments over 35 U.S.C. 101 are persuasive, and claims 1-14 stand rejected as directed to an abstract idea without significantly more.
Regarding the applicant's arguments over the claim rejections under 35 U.S.C. 102, the applicant asserts that Peters fails to teach the amended limitations, specifically after further limiting "the ability to judge appropriately, and the ability to attract applicants." The examiner agrees that Peters does not specifically teach "the ability to judge appropriately includes the interviewer's ability to select appropriate applicants for the next stage of selection or job offers" and "the ability to attract applicants includes the interviewer's ability to increase the applicant's favorable impression of the predetermined organization." The examiner appreciates these further limitations; however, the claims are now rejected under 35 U.S.C. 103 because it would have been obvious to modify Peters to arrive at these limitations in view of the teachings of the NPL Workable. Please see the rejection above for more details. Therefore, the claims stand rejected under 35 U.S.C. 103, and the applicant's arguments are moot in view of the updated rejection because the combination of Peters and Workable teaches or suggests each and every limitation.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
- Olivier et al. (US 20150120398 A1) discloses an assessment of top interviewers and interviewers who may need a bit more training and experience in screening the talent of job candidates during job interviews.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICO LAUREN PADUA whose telephone number is (703) 756-1978. The examiner can normally be reached Monday through Friday, 8:30 am to 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jessica Lemieux can be reached at (571) 270-3445. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NICO L PADUA/ Junior Patent Examiner, Art Unit 3626
/JESSICA LEMIEUX/ Supervisory Patent Examiner, Art Unit 3626