DETAILED ACTION
1. This communication is in response to Application No. 17/961,997, filed on October 7, 2022, in which Claims 1-6 are presented for examination.
Notice of Pre-AIA or AIA Status
2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
3. The listing of references in the specification is not a proper information disclosure statement. 37 CFR 1.98(b) requires a list of all patents, publications, or other information submitted for consideration by the Office, and MPEP § 609.04(a) states, "the list may not be incorporated into the specification but must be submitted in a separate paper." Examiner notes that Pars. [0027]-[0028] of Applicant’s specification reference a U.S. Patent and a U.S. PG-PUB, which are not properly cited, as no information disclosure statement has been submitted by the Applicant. Therefore, unless the references have been cited by the examiner on form PTO-892, they have not been considered.
Claim Objections
4. Claim 2 is objected to because of the following informalities:
The claim recites “[…] biosensor data from a biosensor device worm by the participant […]” but instead should recite “[…] biosensor data from a biosensor device worn by the participant […]” to correct the minor typographical error.
Appropriate correction is required. Examiner notes that Claim 3 depends on Claim 2 and thus is also objected to by virtue of dependency.
Claim Rejections - 35 USC § 112
5. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
6. Claims 1-6 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
7. The term “moment of interest” in Claims 1-6 is a relative term which renders the claims indefinite. The term “moment of interest” is not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. The claims do not define a threshold, degree, or requisite as to what is considered a “moment of interest” of the participant’s interaction with the test session and/or what differentiates a “moment of interest” from a “detracting event,” thus rendering the claims indefinite.
For the purpose of examination, Examiner interprets the term “moment of interest” to pertain to moments/periods of time within the test session where the participant is engaged and/or displaying positive interest (e.g., eager, intrigued, curious, enthusiastic), based on the analysis of synthesized semantic data, eye tracking data, biosensor input data, and facial analysis, as recited by Independent Claim 1. Further, for the purpose of examination, Examiner interprets the term “detracting event” to pertain to moments/periods of time within the test session where the participant is disengaged or displaying negative interest (e.g., indifferent, unconcerned, distracted), based on the analysis of synthesized semantic data, eye tracking data, biosensor input data, and facial analysis, as recited by Independent Claim 1. This interpretation is applied to the claim mapping within the 35 U.S.C. 103 rejection below.
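Examiner's note (illustrative only): the following minimal sketch shows one way the above interpretation could be operationalized, i.e., how synthesized semantic, eye tracking, biosensor, and facial signals for a window of the test session might be combined into a “moment of interest” or “detracting event” label. The sketch is the Examiner's hypothetical; all names, weights, and thresholds appear nowhere in the claims or the cited references.

from dataclasses import dataclass

@dataclass
class WindowSignals:
    semantic_sentiment: float  # -1.0 (negative wording) .. +1.0 (positive wording)
    gaze_on_task: float        # 0.0 (looking away) .. 1.0 (fixated on the task UI)
    biosensor_arousal: float   # 0.0 (flat) .. 1.0 (elevated heart rate / skin response)
    facial_valence: float      # -1.0 (frown) .. +1.0 (smile)

def label_window(w: WindowSignals) -> str:
    # Synthesize the four recited data streams into a single engagement score.
    score = (0.3 * w.semantic_sentiment + 0.3 * w.gaze_on_task
             + 0.2 * w.biosensor_arousal + 0.2 * w.facial_valence)
    if score >= 0.5:
        return "moment of interest"   # engaged / positive interest
    if score <= 0.0:
        return "detracting event"     # disengaged / negative interest
    return "neutral"

print(label_window(WindowSignals(0.8, 0.9, 0.6, 0.7)))    # moment of interest
print(label_window(WindowSignals(-0.6, 0.1, 0.2, -0.5)))  # detracting event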
8. Claim 1 recites the limitation "training the artificial intelligence" without any previous recitation of an artificial intelligence. There is insufficient antecedent basis for this limitation in the claim. This applies to Independent Claim 1 and to Dependent Claims 2-6 by virtue of dependency. Thus, Claims 1-6 are indefinite for the reasons stated above. Examiner additionally notes that Applicant lists various “classifying” and “identifying” steps that comprise the training of the supposed artificial intelligence model, but there is no recitation of which data is input into the model in order to train it in the first place.
Claim Rejections - 35 USC § 101
9. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
10. Claims 1-6 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding Claim 1:
Step 1: Claim 1 is a method claim, and Claims 2-6 depend therefrom. Therefore, Claims 1-6 are directed to a process, one of the four statutory categories.
2A Prong 1: If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas.
a method for analyzing qualitative remote user experience and usability test results […] (mental process – other than reciting “using artificial intelligence”, analyzing qualitative remote user experience and usability test results may be performed manually by a user observing/analyzing qualitative remote user experience and usability test results)
selecting at least one participant based on predetermined criteria (mental process – selecting at least one participant may be performed manually by a user observing/analyzing the predetermined criteria and accordingly using judgement/evaluation to select at least one participant based on said analysis of the predetermined criteria)
recording data of the at least one participant's interaction with a test session […] (mental process – other than reciting “through remote testing software”, recording data of the at least one participant’s interaction with a test session may be performed manually by a user observing/analyzing the participant’s interactions with a test session and accordingly using judgement/evaluation to record data (with the aid of pen and paper) based on said analysis of the participant’s interactions)
identifying a plurality of moments of interests of the participant's interaction from the test session by synthesizing semantic data, eye tracking data, biosensor input data, and facial analysis from the inputted recorded data (mental process – identifying a plurality of moments of interests may be performed manually by a user observing/analyzing the semantic data, eye tracking data, biosensor input data, and facial analysis data (which is already available, based on the preceding “inputting […] for data analysis” limitation) and accordingly using judgement/evaluation to synthesize/combine the data based on said analysis to produce insights which may enable the user to identify a plurality of moments of interest of the participant’s interaction from the test session)
classifying at least one identified moment of interest as a detracting event (mental process – classifying at least one identified moment of interest as a detracting event may be performed manually by a user observing/analyzing the at least one moment of interest and accordingly using judgement/evaluation to identify and correspondingly classify any occurrence that interrupts the user’s focus/diverts the user’s attention/interaction away from the testing session as a detracting event)
classifying at least one non-identified moment of interest as a moment of interest (mental process – classifying at least one non-identified moment of interest as a moment of interest may be performed manually by a user observing/analyzing moment of interests (both identified and non-identified moments) and accordingly using judgement/evaluation to classify at least one of the non-identified moments of interest as a moment of interest)
identifying which input data is associated with the detracting events (mental process – identifying which input data is associated with detracting events may be performed manually by a user observing/analyzing the input data and detracting events and accordingly using judgement/evaluation to identify which input data is associated with the detracting events)
identifying which input data is associated with the non-identified moments of interest (mental process – identifying which input data is associated with the non-identified moments of interest may be performed manually by a user observing/analyzing the input data and non-identified moments of interest and accordingly using judgement/evaluation to identify which input data is associated with the non-identified moments of interest)
identifying which input data is associated with moments of interest (mental process – identifying which input data is associated with moments of interest may be performed manually by a user observing/analyzing the input data and moments of interest and accordingly using judgement/evaluation to identify which input data is associated with moments of interest)
2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements:
[…] using artificial intelligence (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high level recitation of using artificial intelligence to analyze remote user experience and usability test results without significantly more)
[…] through remote testing software (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high level recitation of using remote testing software without significantly more)
inputting the recorded data from the test session into a central computer for data analysis (Adding insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g))
training the artificial intelligence by […] (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high level recitation of training a machine learning model with previously determined data without significantly more)
outputting the recorded data of the participant's interaction with the test session with the identified moments of interest (Adding insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g))
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
[…] using artificial intelligence (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high level recitation of using artificial intelligence to analyze remote user experience and usability test results without significantly more. This cannot provide an inventive concept)
[…] through remote testing software (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high level recitation of using remote testing software without significantly more. This cannot provide an inventive concept)
inputting the recorded data from the test session into a central computer for data analysis (MPEP 2106.05(d)(II) indicates that merely “Receiving or transmitting data over a network” is a well-understood, routine, conventional function when it is claimed in a merely generic manner (as it is in the present claim). Thereby, a conclusion that the claimed limitation is well-understood, routine, conventional activity is supported under Berkheimer)
training the artificial intelligence by […] (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high level recitation of training a machine learning model with previously determined data without significantly more. This cannot provide an inventive concept)
outputting the recorded data of the participant's interaction with the test session with the identified moments of interest (MPEP 2106.05(d)(II) indicates that merely “Presenting offers and gathering statistics” is a well-understood, routine, conventional function when it is claimed in a merely generic manner (as it is in the present claim). Thereby, a conclusion that the claimed limitation is well-understood, routine, conventional activity is supported under Berkheimer)
For the reasons above, Claim 1 is rejected as being directed to an abstract idea without significantly more. This rejection applies equally to dependent claims 2-6. The additional limitations of the dependent claims are addressed below.
Regarding Claim 2:
Step 2A Prong 1:
See the rejection of Claim 1 above, which Claim 2 depends on.
Step 2A Prong 2 & Step 2B:
wherein the recorded data is a video recording of the participant's screen during the participant's interaction with the test session, an audiovisual recording of the participant's interaction with the test session, and/or biosensor data from a biosensor device worm by the participant during the test session (Field of Use – limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception does not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application; in this case specifying that the recorded data is a video recording/audiovisual recording/biosensor data does not integrate the exception into a practical application nor amount to significantly more – See MPEP 2106.05(h))
Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.
Regarding Claim 3:
Step 2A Prong 1:
See the rejection of Claim 2 above, which Claim 3 depends on.
Step 2A Prong 2 & Step 2B:
wherein the recorded data is outputted to a user interface with the video recording of the participant's screen and/or the audiovisual recording of the participant having identified moments of interest timestamped (Field of Use – limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception does not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application; in this case specifying that the recorded data is outputted to a user interface with the video recording/audiovisual recording having identified moments of interest timestamped does not integrate the exception into a practical application nor amount to significantly more – See MPEP 2106.05(h))
Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.
Regarding Claim 4:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 4 depends on.
identifying data sets that are associated with multiple detracting events (mental process – identifying data sets that are associated with multiple detracting events may be performed manually by a user observing/analyzing the data sets and the multiple detracting events and accordingly using judgement/evaluation to identify data sets that are associated with multiple detracting events)
identifying data sets that are associated with multiple non-identified moments of interest (mental process – identifying data sets that are associated with multiple non-identified moments of interest may be performed manually by a user observing/analyzing the data sets and multiple non-identified moments of interest and accordingly using judgement/evaluation to identify data sets that are associated with multiple non-identified moments of interest)
identifying data sets that are associated with multiple moments of interest (mental process – identifying data sets that are associated with multiple moments of interest may be performed manually by a user observing/analyzing the data sets and multiple moments of interest and accordingly using judgement/evaluation to identify data sets that are associated with multiple moments of interest)
Step 2A Prong 2 & Step 2B:
Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.
Regarding Claim 5:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 5 depends on.
identifying data sets that are associated with at least one detracting event (mental process – identifying data sets that are associated with at least one detracting event may be performed manually by a user observing/analyzing the data sets and the at least one detracting event and accordingly using judgement/evaluation to identify data sets that are associated with the at least one detracting event)
identifying data sets that are associated with at least one non-identified moment of interest (mental process – identifying data sets that are associated with at least one non-identified moment of interest may be performed manually by a user observing/analyzing the data sets and the at least one non-identified moment of interest and accordingly using judgement/evaluation to identify data sets that are associated with the at least one non-identified moment of interest)
identifying data sets that are associated with at least one moment of interest (mental process – identifying data sets that are associated with at least one moment of interest may be performed manually by a user observing/analyzing the data sets and the at least one moment of interest and accordingly using judgement/evaluation to identify data sets that are associated with the at least one moment of interest)
Step 2A Prong 2 & Step 2B:
Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.
Regarding Claim 6:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 6 depends on.
identifying at least one moment of interest during the at least one participant's interaction with the test session (mental process – identifying at least one moment of interest during the at least one participant’s interaction with the test session may be performed manually by a user observing/analyzing the participant’s interaction with the test session and accordingly using judgement/evaluation to identify at least one moment of interest based on said analysis of the participant’s interaction with the test session)
identifying, thereafter, at least one further moment of interest (mental process – identifying at least one further moment of interest may be performed manually by a user observing/analyzing the participant’s interaction with the test session and accordingly using judgement/evaluation to identify at least one further moment of interest based on said analysis of the participant’s interaction with the test session)
Step 2A Prong 2 & Step 2B:
wherein, the recorded data is input into the central computer during the at least one participant's interaction with the test session (MPEP 2106.05(d)(II) indicates that merely “Receiving or transmitting data over a network” is a well-understood, routine, conventional function when it is claimed in a merely generic manner (as it is in the present claim). Thereby, a conclusion that the claimed limitation is well-understood, routine, conventional activity is supported under Berkheimer)
training the artificial intelligence during the at least one participant's interaction with the test session (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner's note: high level recitation of training a machine learning model with previously determined data without significantly more. This cannot provide an inventive concept)
Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.
Claim Rejections - 35 USC § 103
11. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
12. Claims 1-6 are rejected under 35 U.S.C. 103 as being unpatentable over Camus et al. (hereinafter Camus) (US PG-PUB 20220004591), in view of Chen et al. (hereinafter Chen) (US PG-PUB 20180081432), further in view of Yen et al. (hereinafter Yen) (US PG-PUB 20200310842).
Regarding Claim 1, Camus teaches a method for analyzing qualitative remote user experience and usability test results using artificial intelligence (Camus, Abstract, “A method comprises displaying a plurality of items to a user on a first page on a display screen of an electronic device. Each item of the plurality of items displayed on the first page is classified according to whether an item is of interest to the user viewing the display screen. A correlation factor between the user and each item classified as of interest to the user is determined.” & Par. [0014], “One way to ease this burden is to make the system cognizant of a user's specific interests in a personalized and unique way so that the user has a faster and more convenient experience.”, therefore, methods for analyzing qualitative remote user experience and usability results using artificial intelligence (See Par. [0041] for explicit recitation of machine learning models) are disclosed), the method comprising:
selecting at least one participant based on predetermined criteria (Camus, Par. [0015], “According to one embodiment, a user wants to buy an audio surround sound speaker system and navigates to a retailer's website. He starts browsing various speaker systems that are available for purchase and selects a category of speaker systems which displays twenty speaker systems on the first page and there are many more subsequent pages.”, thus, in at least one embodiment, a participant who shows intent to purchase a surround sound speaker system is selected based on the predetermined criteria of a user making an online retail purchase. Preceding Par. [0014] and the remaining portion of Par. [0015] also detail how users who are unable to use their hands due to paralysis, pain, or preoccupation may be ideal participants for using the system of Camus, as the user would not need to use their hands to make computer-based selections – instead, the system of Camus would provide selections/suggestions based on measured user interest. Hence, this may also enable further user/participant selection based on a broad predetermined criteria);
recording data of the at least one participant's interaction with a test session through remote testing software (Camus, Par. [0030], “According to one embodiment, the custom item selection program 112 interfaces with a camera to collect images of the user to detect eye and body movements of the user and conduct facial recognition that may indicate interest of the user in certain items within the list of results. […] additional analysis techniques which are explained in further detail below with respect to FIG. 3 are employed, such as analysis of biometric data collected from a smart device on a user, language processing and sentiment analysis of words spoken by the user that are detected by a microphone […]”, thus, the at least one participant’s interactions with a test session may be recorded through remote testing software (See Figure 1 label 112 which shows that the ‘custom selection program’ used to interface with the user is separate/remote from the user’s client computing device label 102), including interfacing with a camera to collect images and interfacing with a microphone to collect words/spoken language – this interpretation of the recording comprising an audiovisual recording is supported by Applicant’s Claim 2);
inputting the recorded data from the test session into a central computer for data analysis (Camus, Par. [0030], “Once interest is indicated by the user, additional analysis techniques which are explained in further detail below with respect to FIG. 3 are employed, such as analysis of biometric data collected from a smart device on a user, language processing and sentiment analysis of words spoken by the user that are detected by a microphone and analysis of the available online history of the user, including social media posts and profiles, specific desires or requirements entered manually by a user as well as Internet browsing history. The custom item selection program 112 processes these inputs and classifies the items in the first page of results according to user interest.”, therefore, the recorded data from the test session (comprising at least the aforementioned audiovisual recording) is inputted into a central computer (custom selection program label 112 which is hosted by the web server computer label 110) for data analysis);
identifying a plurality of moments of interests of the participant's interaction from the test session by synthesizing semantic data, eye tracking data, biosensor input data, and facial analysis from the inputted recorded data (Camus, Par. [0041], “At 310, each item of the plurality of items displayed on the first page is classified according to whether an item is of interest to the user is based on a machine learning classification model that predicts interest of a user in each of the plurality of items.”, therefore, a plurality of moments of interests of the participant’s interaction from the test session may be identified. This is additionally depicted by Figure 3, which shows how the preceding steps involve collecting and synthesizing semantic data (label 306), gaze analysis/eye tracking (label 302), biometric/biosensor data (label 304), and facial analysis data (label 302), which are used to identify the plurality of moments of interest (label 310))
training the artificial intelligence (Camus, Par. [0041], “The training data for the machine learning algorithms may be collected from a single user. The machine learning models may be used to determine a user's interest in an item in a personal, customized way to account for the individual differences that exist in how people choose. In some embodiments, training data is collected from a group of users. In either case, training data is not collected unless the user consents. The training data may include some or all of the data collected in operations 302, 304, 306, and 308 of process 300.”, thus, training of a machine learning/artificial intelligence model is disclosed) by
classifying at least one identified moment of interest as a detracting event (Camus, Par. [0042], “According to one embodiment, the custom item selection program 112 may include correlation of inputs to user interest module 424 which utilizes supervised machine learning 430 to determine user interest in a list of items based on two calculations: classification of an item into interested and not interested categories 426 and determination of a correlation factor 428, e.g., a Pearson correlation factor.”, therefore, at least one identified moment of interest may be classified as a detracting event (user is not interested – See 35 U.S.C. 112(b) rejection above for Examiner’s BRI of the terms “moment of interest”/“detracting events”))
classifying at least one non-identified moment of interest as a moment of interest (See introduction of Chen reference below for teaching of classifying at least one non-identified moment of interest as a moment of interest),
identifying which input data is associated with the detracting events (Camus, Par. [0038], “Any analysis results, including the time the biometric data was captured, may be stored in the database 114 for assistance in determining positive or negative interest in future items or for use in the interest level calculation. The biometric data analysis results may be used in the classification of each item of the plurality of items displayed on the first page to determine whether a current item is of interest to the user.”, therefore, the biometric data/input data which is associated with detracting events (having a negative interest – See 35 U.S.C. 112(b) rejection above for Examiner’s BRI of the terms “moment of interest”/“detracting events”) may be identified and stored for training the model to determine interest in future items)
identifying which input data is associated with the non-identified moments of interest (See introduction of Chen reference below for teaching of identifying which input data is associated with the non-identified moments of interest), and
identifying which input data is associated with moments of interest (Camus, Par. [0038], “Any analysis results, including the time the biometric data was captured, may be stored in the database 114 for assistance in determining positive or negative interest in future items or for use in the interest level calculation. The biometric data analysis results may be used in the classification of each item of the plurality of items displayed on the first page to determine whether a current item is of interest to the user.”, therefore, the biometric data/input data which is associated with moments of interest (having a positive interest – See 35 U.S.C. 112(b) rejection above for Examiner’s BRI of the terms “moment of interest”/“detracting events”) may be identified and stored for training the model to determine interest in future items),
outputting the recorded data of the participant's interaction with the test session with the identified moments of interest (See introduction of Yen reference below for teaching of outputting the recorded data of the participant’s interaction with the test session with the identified moments of interest).
Camus does not explicitly disclose:
classifying at least one non-identified moment of interest as a moment of interest
identifying which input data is associated with the non-identified moments of interest
However, Chen teaches:
classifying at least one non-identified moment of interest as a moment of interest (Chen, Par. [0047], “For example, the content rich material displayed at 320 may be cached for quick retrieval later. Then, if a false negative reaction is determined at 322, the context selection program 110 a and 110 b may search for cached content rich material at 308 that corresponds to the region that the user is focusing on and then return to 320 to display the cached content rich material again, foregoing the intervening steps.”, therefore, at least one non-identified moment of interest may be classified as a moment of interest, in scenarios where the reaction/interest is determined to be a ‘false negative reaction’ (i.e., a reaction incorrectly identified as negative interest, when instead the reaction indicates positive interest/a moment of interest))
identifying which input data is associated with the non-identified moments of interest (Chen, Par. [0044], “Additionally, the content rich material may be cached for later retrieval in the event of a false negative reaction due to, for example, the user being interrupted by someone or an event around the user that takes the user's focus away from the screen.”, thus, input data (including user interactions comprising interruptions and/or events that take the user’s focus away from the screen), may be associated with non-identified moments of interest (false negative reaction) and may be identified and correspondingly cached for training/re-training the model)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method for analyzing qualitative remote user experience and usability test results using artificial intelligence per Claim 1, as disclosed by Camus, to include classifying at least one non-identified moment of interest as a moment of interest and identifying which input data is associated with the non-identified moments of interest, as disclosed by Chen. One of ordinary skill in the art would have been motivated to make this modification to improve model recall and robustness through the classifying of non-identified moments of interest/false negative reactions and the identifying of associated input data (Chen, Par. [0047], “For example, the content rich material displayed at 320 may be cached for quick retrieval later. Then, if a false negative reaction is determined at 322, the context selection program 110 a and 110 b may search for cached content rich material at 308 that corresponds to the region that the user is focusing on and then return to 320 to display the cached content rich material again, foregoing the intervening steps.”).
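Examiner's note (illustrative only): the cache-and-relabel flow of Chen Pars. [0044] and [0047] may be reduced to the following hypothetical sketch, in which a negative reaction that coincides with an interruption is treated as a false negative, re-classified as a moment of interest, and the cached material is redisplayed. All names and the in-memory cache are the Examiner's illustration only.

cache: dict = {}  # region -> previously displayed content rich material

def display(region: str, material: str) -> None:
    cache[region] = material  # cache for quick retrieval later (cf. Chen Par. [0047])
    print(f"displaying {material!r} for region {region!r}")

def on_reaction(region: str, reaction: str, interrupted: bool,
                training_set: list) -> None:
    if reaction == "negative" and interrupted:
        # False negative: the user was distracted, not uninterested.
        training_set.append((region, "moment of interest"))
        if region in cache:
            display(region, cache[region])  # redisplay the cached material
    elif reaction == "negative":
        training_set.append((region, "detracting event"))
    else:
        training_set.append((region, "moment of interest"))

examples: list = []
display("speaker_specs", "360-degree product view")
on_reaction("speaker_specs", "negative", interrupted=True, training_set=examples)
print(examples)  # [('speaker_specs', 'moment of interest')]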
Camus in view of Chen does not explicitly disclose outputting the recorded data of the participant's interaction with the test session with the identified moments of interest.
However, Yen teaches outputting the recorded data of the participant's interaction with the test session with the identified moments of interest (Yen, Par. [0126], “The audio and video sentiment analysis system can identify characteristics of the user 614, such as a typical voice tone, posture, face characteristics for a neutral face. The typical characteristics can serve as a baseline when correlating the sentiment. The audio and video sentiment analysis system can bookmark certain time frames and/or portions of the audio stream of the user, video stream of the user, user interface video, and/or other user data corresponding to the identified sentiment.” & Par. [0127], “The audio and video sentiment analysis system can set a very high urgency level based on this analysis and may route the help desk ticket, usability improvement comments, and general product feedback to a member currently available to help. In some embodiments, the audio and video sentiment analysis system can output a probability for a sentiment, based on an output of a neural network that is trained to identify one or more sentiments based on user data.”, therefore, the recorded data of the participant’s interaction with the test session (audiovisual recording) may be outputted with identified/bookmarked moments of interest)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method for analyzing qualitative remote user experience and usability test results using artificial intelligence, as disclosed by Camus in view of Chen, to include outputting the recorded data of the participant's interaction with the test session with the identified moments of interest, as disclosed by Yen. One of ordinary skill in the art would have been motivated to make this modification to correlate the recorded data with identified moments of interest, which may improve accuracy and efficiency in identifying/classifying user interactions with the test session (Yen, Par. [0126], “The audio and video sentiment analysis system can identify characteristics of the user 614, such as a typical voice tone, posture, face characteristics for a neutral face. The typical characteristics can serve as a baseline when correlating the sentiment. The audio and video sentiment analysis system can bookmark certain time frames and/or portions of the audio stream of the user, video stream of the user, user interface video, and/or other user data corresponding to the identified sentiment.”).
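Examiner's note (illustrative only): the bookmarking described in Yen Par. [0126] may be sketched as pairing each identified moment with a timestamp into the session recording, so that the recorded data can be output with the identified moments of interest annotated. The window format and all names are the Examiner's hypothetical.

from datetime import timedelta

def bookmark_moments(session_length_s: int, moment_windows: list) -> list:
    # Return (timestamp, label) pairs for annotating the session recording.
    bookmarks = []
    for start_s, end_s, label in moment_windows:
        if 0 <= start_s <= end_s <= session_length_s:
            bookmarks.append((str(timedelta(seconds=start_s)), label))
    return bookmarks

marks = bookmark_moments(
    session_length_s=600,
    moment_windows=[(42, 55, "moment of interest"),
                    (310, 330, "detracting event")],
)
for stamp, label in marks:
    print(stamp, label)  # 0:00:42 moment of interest / 0:05:10 detracting event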
Regarding Claim 2, Camus in view of Chen in view of Yen teaches the method according to claim 1, wherein the recorded data is a video recording of the participant's screen during the participant's interaction with the test session, an audiovisual recording of the participant's interaction with the test session, and/or biosensor data from a biosensor device worm by the participant during the test session (Camus, Par. [0030], “According to one embodiment, the custom item selection program 112 interfaces with a camera to collect images of the user to detect eye and body movements of the user and conduct facial recognition that may indicate interest of the user in certain items within the list of results. […] additional analysis techniques which are explained in further detail below with respect to FIG. 3 are employed, such as analysis of biometric data collected from a smart device on a user, language processing and sentiment analysis of words spoken by the user that are detected by a microphone […]”, thus, the recorded data may comprise an audiovisual recording of the participant’s interaction. This is supported by Figure 3, which also depicts that images are collected from a camera and gaze detection (label 302) and spoken words are captured for sentiment analysis (label 306), as well as biometric data (label 304) being collected through wearable electronic devices (See Camus Par. [0038])).
Regarding Claim 3, Camus in view of Chen in view of Yen teaches the method according to claim 2, wherein the recorded data is outputted to a user interface with the video recording of the participant's screen and/or the audiovisual recording of the participant having identified moments of interest timestamped (Yen, Par. [0126], “The audio and video sentiment analysis system can identify characteristics of the user 614, such as a typical voice tone, posture, face characteristics for a neutral face. The typical characteristics can serve as a baseline when correlating the sentiment. The audio and video sentiment analysis system can bookmark certain time frames and/or portions of the audio stream of the user, video stream of the user, user interface video, and/or other user data corresponding to the identified sentiment.” & Par. [0127], “The audio and video sentiment analysis system can set a very high urgency level based on this analysis and may route the help desk ticket, usability improvement comments, and general product feedback to a member currently available to help. In some embodiments, the audio and video sentiment analysis system can output a probability for a sentiment, based on an output of a neural network that is trained to identify one or more sentiments based on user data.”, therefore, the recorded data of the participant’s interaction with the test session (audiovisual recording) may be outputted with identified/bookmarked moments of interest. This is similarly supported by Par. [0107], which mentions that the system may sync the recording of the user interface with a captured audio stream using a time stamp and adding indicators of the time on transcribed text).
The reasons of obviousness have been noted in the rejection of Claim 1 above and are applicable herein.
Regarding Claim 4, Camus in view of Chen in view of Yen teaches the method according to claim 1, further comprising:
identifying data sets that are associated with multiple detracting events (Camus, Par. [0038], “Any analysis results, including the time the biometric data was captured, may be stored in the database 114 for assistance in determining positive or negative interest in future items or for use in the interest level calculation. The biometric data analysis results may be used in the classification of each item of the plurality of items displayed on the first page to determine whether a current item is of interest to the user.”, therefore, the biometric data/input data which is associated with detracting events (having a negative interest – See 35 U.S.C. 112(b) rejection above for Examiner’s BRI of the terms “moment of interest”/“detracting events”) may be identified and stored for training the model to determine interest in future items);
identifying data sets that are associated with multiple non-identified moments of interest (Chen, Par. [0044], “Additionally, the content rich material may be cached for later retrieval in the event of a false negative reaction due to, for example, the user being interrupted by someone or an event around the user that takes the user's focus away from the screen.”, thus, input data (including user interactions comprising interruptions and/or events that take the user’s focus away from the screen), may be associated with non-identified moments of interest (false negative reaction) and may be identified and correspondingly cached for training/re-training the model); and
identifying data sets that are associated with multiple moments of interest (Camus, Par. [0038], “Any analysis results, including the time the biometric data was captured, may be stored in the database 114 for assistance in determining positive or negative interest in future items or for use in the interest level calculation. The biometric data analysis results may be used in the classification of each item of the plurality of items displayed on the first page to determine whether a current item is of interest to the user.”, therefore, the biometric data/input data which is associated with moments of interest (having a positive interest – See 35 U.S.C. 112(b) rejection above for Examiner’s BRI of the terms “moment of interest”/“detracting events”) may be identified and stored for training the model to determine interest in future items).
The reasons of obviousness have been noted in the rejection of Claim 1 above and are applicable herein.
Regarding Claim 5, Camus in view of Chen in view of Yen teaches the method according to claim 1, further comprising:
identifying data sets that are associated with at least one detracting event (Camus, Par. [0038], “Any analysis results, including the time the biometric data was captured, may be stored in the database 114 for assistance in determining positive or negative interest in future items or for use in the interest level calculation. The biometric data analysis results may be used in the classification of each item of the plurality of items displayed on the first page to determine whether a current item is of interest to the user.”, therefore, the biometric data/input data which is associated with detracting events (having a negative interest – See 35 U.S.C. 112(b) rejection above for Examiner’s BRI of the terms “moment of interest”/“detracting events”) may be identified and stored for training the model to determine interest in future items);
identifying data sets that are associated with at least one non-identified moment of interest (Chen, Par. [0044], “Additionally, the content rich material may be cached for later retrieval in the event of a false negative reaction due to, for example, the user being interrupted by someone or an event around the user that takes the user's focus away from the screen.”, thus, input data (including user interactions comprising interruptions and/or events that take the user’s focus away from the screen), may be associated with non-identified moments of interest (false negative reaction) and may be identified and correspondingly cached for training/re-training the model); and
identifying data sets that are associated with at least one moment of interest (Camus, Par. [0038], “Any analysis results, including the time the biometric data was captured, may be stored in the database 114 for assistance in determining positive or negative interest in future items or for use in the interest level calculation. The biometric data analysis results may be used in the classification of each item of the plurality of items displayed on the first page to determine whether a current item is of interest to the user.”, therefore, the biometric data/input data which is associated with moments of interest (having a positive interest – See 35 U.S.C. 112(b) rejection above for Examiner’s BRI of the terms “moment of interest”/“detracting events”) may be identified and stored for training the model to determine interest in future items).
The reasons of obviousness have been noted in the rejection of Claim 1 above and are applicable herein.
Regarding Claim 6, Camus in view of Chen in view of Yen teaches the method according to claim 1, wherein,
the recorded data is input into the central computer during the at least one participant's interaction with the test session (Camus, Par. [0030], “Once interest is indicated by the user, additional analysis techniques which are explained in further detail below with respect to FIG. 3 are employed, such as analysis of biometric data collected from a smart device on a user, language processing and sentiment analysis of words spoken by the user that are detected by a microphone and analysis of the available online history of the user, including social media posts and profiles, specific desires or requirements entered manually by a user as well as Internet browsing history. The custom item selection program 112 processes these inputs and classifies the items in the first page of results according to user interest.”, therefore, the recorded data from the test session (comprising at least the aforementioned audiovisual recording) is inputted into a central computer (custom selection program label 112 which is hosted by the web server computer label 110) for data analysis. This would occur during the participant’s interaction, as shown by Figure 2, since the classification, identification, and corresponding display occurs automatically without requiring human interaction with the service provider (See Par. [0052]));
identifying at least one moment of interest during the at least one participant's interaction with the test session (Camus, Par. [0042], “According to one embodiment, the custom item selection program 112 may include correlation of inputs to user interest module 424 which utilizes supervised machine learning 430 to determine user interest in a list of items based on two calculations: classification of an item into interested and not interested categories 426 and determination of a correlation factor 428, e.g., a Pearson correlation factor.”, therefore, at least one moment of interest may be identified (user is interested – See 35 U.S.C. 112(b) rejection above for Examiner’s BRI of the terms “moment of interest”/“detracting events”) during the participant’s interaction);
training the artificial intelligence during the at least one participant's interaction with the test session (Camus, Par. [0041], “The training data for the machine learning algorithms may be collected from a single user. The machine learning models may be used to determine a user's interest in an item in a personal, customized way to account for the individual differences that exist in how people choose. In some embodiments, training data is collected from a group of users. In either case, training data is not collected unless the user consents. The training data may include some or all of the data collected in operations 302, 304, 306, and 308 of process 300.”, thus, the artificial intelligence may be trained during interactions, if the user consents, in order to iteratively improve the model); and
identifying, thereafter, at least one further moment of interest (Camus, Par. [0041], “The training data may include some or all of the data collected in operations 302, 304, 306, and 308 of process 300. The classification may be utilized to boost the Pearson correlation score that is determined in 206 as to whether a user will be interested in an item and seeing the full item details. The classification results may be stored in the database 114 so that the data is most current, and the output would always be up to date.”, therefore, after training the model using the training data provided by the participant/user, further moments of interest may be classified/identified based on an updated Pearson correlation score resulting from the model training).
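Examiner's note (illustrative only): the two-part calculation of Camus Par. [0042] (classification into interested/not interested categories plus a Pearson correlation factor), recomputed as in-session observations accrue, may be sketched as follows. The Pearson coefficient is supplied by statistics.correlation (Python 3.10+); all other names, data, and the boost formula are the Examiner's hypothetical and appear nowhere in the reference.

from statistics import correlation  # Pearson r, available in Python 3.10+

user_signal: list = []  # per-observation engagement signal from the user
item_scores: list = []  # per-observation base interest score for the item
labels: list = []       # interested / not interested training labels

def observe(signal: float, item_score: float, interested: bool) -> None:
    # Accumulate one observation during the participant's session (training data).
    user_signal.append(signal)
    item_scores.append(item_score)
    labels.append(interested)

def classify(item_score: float) -> str:
    # Correlation factor between user signals and item scores, used to boost
    # (or temper) the interest prediction for further moments of interest.
    r = correlation(user_signal, item_scores) if len(labels) >= 2 else 0.0
    return "interested" if item_score * (1 + r) >= 0.5 else "not interested"

observe(0.9, 0.8, True)
observe(0.2, 0.1, False)
observe(0.7, 0.6, True)
print(classify(0.5))  # "interested" once the correlation factor boosts the score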
Conclusion
13. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Devika S Maharaj whose telephone number is (571)272-0829. The examiner can normally be reached Monday - Thursday 8:30am - 5:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexey Shmatov can be reached at (571)270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DEVIKA S MAHARAJ/Examiner, Art Unit 2123