Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The disclosure is objected to because of the following informalities:
Para. 20, “The techniques may include the techniques described herein may include…” should read “The techniques may include…” or “The techniques described herein may include…”.
Para. 33, “when a user is at a medial location” should read “when a user is at a medical location”.
Para. 52, “the spicy noodles the user 202 costumes…” should read “the spicy noodles the user 202 consumes…”.
Appropriate correction is required.
Claim Objections
Claim 2 is objected to because of the following informalities: “determining whether the event corresponding to…” should read “determining whether the event corresponds to…”.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without integration into a practical application or recitation of significantly more.
In the analysis below, the method of independent claim 1 is considered representative of independent claims 1, 8, and 15 since all of the independent claims recite identical steps despite being directed to different statutory categories. Furthermore, each of independent claims 1, 8, and 15 is directed to one of the four statutory categories of eligible subject matter; thus, the claims pass Step 1 of the Subject Matter Eligibility Test (See flowchart in MPEP 2106).
Step 2A, prong 1 analysis
The independent claims are directed to determining, based at least in part on the derived label, whether the event experienced by the user is a memorable event for the user. The above step can be performed mentally. In particular, a human can keep a diary where they determine memorable events to write down and then record them. Official Notice is taken that keeping a diary consisting of daily or periodic happenings is an old and well known practice; typically a person will think of significant events over, e.g., the past day and record them in a book. Any traverse of this official notice must include a plain statement that the signatory does not believe this to be true. As such, the description in independent claims 1, 8, and 15 is an abstract idea – namely, a mental process. Accordingly, the analysis under prong one of step 2A of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (See flowchart in MPEP 2106).
Additional elements
The additional elements recited in each of the independent claims are the steps of associating an inferred label describing aspects of the event with each individual data stream, the inferred label indicating at least one of a positive or negative reaction experienced by the user, a description of a visual scene, or a description of environmental conditions received from an external source… and determining, using the raw label of each of the multiple individual sensors and inferred labels of the data stream from each of the multiple individual sensors associated with the user during the event, a derived label describing an event experienced by the user. Independent claims 8 and 15 include the additional elements of processors configured to perform the steps recited in claim 1.
Step 2A, prong 2 analysis
The above-identified additional elements do not integrate the judicial exception into a practical application.
The step of associating an inferred label describing aspects of the event with each individual data stream, the inferred label indicating at least one of a positive or negative reaction experienced by the user, a description of a visual scene, or a description of environmental conditions received from an external source amounts to insignificant pre-solution activity which does not integrate the claimed mental process into a practical application (See MPEP 2106.05(g)). Moreover, it corresponds with the mental steps associated with keeping a diary and writing down what the user saw or experienced.
The step of determining, using the raw label of each of the multiple individual sensors and inferred labels of the data stream from each of the multiple individual sensors associated with the user during the event, a derived label describing an event experienced by the user amounts to insignificant post-solution activity which does not integrate the claimed mental process into a practical application (See MPEP 2106.05(g)).
Each of the other additional elements (circuitry, sensors) amounts to merely using a computer as a tool to perform the claimed mental process. Implementing an abstract idea on a computer does not integrate a judicial exception into a practical application (See MPEP 2106.05(f)).
Moreover, the additional elements of the claims do not recite an improvement in the functioning of a computer or other technology or technical field, the claimed steps are not performed using a particular machine, the claimed steps do not effect a transformation, and the claims do not apply the judicial exception in any meaningful way beyond generically linking the use of the judicial exception to a particular technological environment (See MPEP 2106.04(d)). Therefore, the analysis under prong two of step 2A of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (See flowchart in MPEP 2106).
Step 2B
Finally, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As noted above, the step of associating an inferred label describing aspects of the event with each individual data stream, the inferred label indicating at least one of a positive or negative reaction experienced by the user, a description of a visual scene, or a description of environmental conditions received from an external source amounts to insignificant pre-solution activity. Such insignificant pre-solution activity does not constitute significantly more than the claimed mental process (See MPEP 2106.05(g)).
The step of determining, using the raw label of each of the multiple individual sensors and inferred labels of the data stream from each of the multiple individual sensors associated with the user during the event, a derived label describing an event experienced by the user amounts to insignificant post-solution activity. Such insignificant post-solution activity does not constitute significantly more than the claimed mental process (See MPEP 2106.05(g)).
The other additional elements (circuitry, sensors) are generic computer features which perform generic computer functions that are well-understood, routine, and conventional and do not amount to more than implementing the abstract idea with a computerized system. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea).
Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation, and mere implementation on a generic computer does not add significantly more to the claims. Accordingly, the analysis under step 2B of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (See flowchart in MPEP 2106).
For all of the foregoing reasons, independent claims 1, 8, and 15 do not recite eligible subject matter under 35 U.S.C. 101. While the Technical Field (Specification, page 1) states that the method/device would include comparable hallucinated images to generate a prompt for secure access (to a computer system), the present claims do not require any of these elements.
Dependent claims 2-7 are dependent on independent claim 1 and therefore include all of the limitations of claim 1. Dependent claims 9-14 are dependent on independent claim 8 and therefore include all of the limitations of claim 8. Dependent claims 16-20 are dependent on independent claim 15 and therefore include all of the limitations of claim 15. Therefore, claims 2-7, 9-14, and 16-20 recite the same abstract idea of a mental process which can be performed in the mind.
Claims 2, 9, and 16 each recite determining whether the event corresponds to one or more events having previously been experienced by the user during a predetermined window of time. This feature is a mental process since a person can recollect information corresponding to an event in order to determine if it was previously experienced. The use of a computing system to perform the determination amounts to merely implementing the abstract idea using a computer, which neither integrates the mental process into a practical application nor adds significantly more (See MPEP 2106.05(f)).
Claims 3, 10, and 17 each recite that the determining of whether the event experienced by the user is a memorable event for the user is executed by a neural network trained to identify memorable events. This feature is a mental process since a person can recall whether an event is memorable or not. The use of a computing system to perform the determination amounts to merely implementing the abstract idea using a computer, which neither integrates the mental process into a practical application nor adds significantly more (See MPEP 2106.05(f)).
Claims 4, 11, and 18 each recite that the inferred label indicates a negative reaction experienced by the user and determining that the event experienced by the user is not a memorable event for the user. This feature merely narrows the scope of the insignificant pre- and post-solution activity which does not integrate the abstract idea into a practical application or add significantly more.
Claims 5 and 12 each recite that determining the derived label describing the event experienced by the user further comprises using data associated with the user received from one or more external sources associated with the user. This feature merely narrows the scope of the insignificant pre- and post-solution activity which does not integrate the abstract idea into a practical application or add significantly more.
Claims 6, 13, and 19 each recite performing interpolation on one or more individual data streams to fill in gaps in the individual data stream. This merely narrows down the insignificant pre- and post-solution activity which does not integrate the abstract idea into a practical application or add significantly more.
Claims 7, 14, and 20 each recite that the multiple individual sensors associated with the user comprise sensors for tracking physical activity, location, biomarkers, vital signs, and environmental factors. This merely narrows down the insignificant pre- and post-solution activity which does not integrate the abstract idea into a practical application or add significantly more.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1, 3, 5, 8, 10, 12, 15, and 17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by McDorman et al. (US Patent App. Pub. No. 2021/0297422 A1).
Regarding claim 1, McDorman teaches a method for identifying an event as a memorable experience associated with a user, the method comprising: receiving a data stream from each of multiple individual sensors associated with a user during an event, each data stream from each individual sensor having a raw label provided by the individual sensor, each raw label indicating what data from the data stream represents (Para. 14, “Further, the location-based telemetry application may be configured to capture sensor data from sensors of the client device that reside on the client device. The sensor data may be used to generate the location-based telemetry data. The sensor data may include, geolocation (i.e. GPS sensor), weather conditions (i.e. temperature sensor), noise pollution (i.e. microphone), user kinematic movement (i.e. accelerometers), and/or so forth.”); associating an inferred label describing aspects of the event with each individual data stream, the inferred label indicating at least one of a positive or negative reaction experienced by the user, a description of a visual scene, or a description of environmental conditions received from an external source (Para. 12, “In various examples, location-based telemetry data may relate to places of interest and/or events visited by a user over a predetermined time interval. Non-limiting examples may include geolocations visited by the client device, multimedia captured at the visited geolocations, transactions initiated via the client device at visited geolocations, weather conditions at the visited geolocations at the visited point-in-time, events taking place at the visited geolocation at the visited point-in-time, and/or any other information pertinent to a visited geolocation and/or client device”); determining, using the raw label of each of the multiple individual sensors and inferred labels of the data stream from each of the multiple individual sensors associated with the user during the event, a derived label describing an event experienced by the user (Para. 58, “The telemetry data analysis component 422 may use one or more trained machine-learning algorithms to analyze the location-based telemetry data and infer a context associated with the visit. For example, the telemetry data may include a geolocation from a GPS sensor of the client device along with calendar data from a third-party calendar application that resides on the client device. In this example, the telemetry data analysis component 422 may infer that the context of visit relates to a schedule appointment.”); and determining, based at least in part on the derived label, whether the event experienced by the user is a memorable event for the user (Para. 11, “The location-based telemetry data is intended to capture information about a user, that is specific to the user, but at the same time, not traditionally known or captured as part of a user profile.”; Para. 16, “By way of example, an authentication challenge may ask the user to identify a third-party with whom they conducted a voice communication (i.e. phone call) at a geolocation at a particular point in time, an event or landmark visited at the geolocation a particular point in time, or a weather condition or noise pollution experienced at the geolocation a particular point in time.”).
Regarding claim 3, McDorman teaches all of the elements of claim 1, as stated above, as well as wherein the receiving, the associating, the determining a derived label, and the determining whether the event experienced by the user is a memorable event for the user are executed by a neural network trained to identify memorable events (Para. 58, “The telemetry data analysis component 422 may use one or more trained machine-learning algorithms to analyze the location-based telemetry data and infer a context associated with the visit”; Para. 76, “The one or more machine learning algorithms may include but are not limited to algorithms such as… neural networks”).
Regarding claim 5, McDorman teaches all of the elements of claim 1, as stated above, as well as wherein determining the derived label describing the event experienced by the user further comprises using data associated with the user received from one or more external sources associated with the user (Para. 14, “The sensor data may be used to generate the location-based telemetry data. The sensor data may include, geolocation (i.e. GPS sensor), weather conditions (i.e. temperature sensor), noise pollution (i.e. microphone), user kinematic movement (i.e. accelerometers), and/or so forth.”).
Regarding claim 8, the recited system performs substantially the same function as the method of claim 1. It is rejected under the same analysis.
Regarding claim 10, the recited elements perform substantially the same function as that of claim 3. It is rejected under the same analysis.
Regarding claim 12, the recited elements perform substantially the same function as that of claim 5. It is rejected under the same analysis.
Regarding claim 15, the recited non-transitory computer-readable media (Para. 54, “The memory 408 may further include non-transitory computer-readable media”) performs substantially the same function as the method of claim 1. It is rejected under the same analysis.
Regarding claim 17, the recited elements perform substantially the same function as that of claim 3. It is rejected under the same analysis.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 2, 4, 7, 9, 11, 14, 16, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over McDorman.
Regarding claim 2, McDorman teaches all of the elements of claim 1, as stated above, as well as determining whether the event corresponds to one or more events having previously been experienced by the user during a predetermined window of time (Para. 11, “For example, a user may frequent a merchant store (i.e. coffee store) on particular days of the week and at particular times of the day”); and based on the event experienced by the user during the predetermined window of time, determining that the event is a memorable event (Para. 12, “In various examples, location-based telemetry data may relate to places of interest and/or events visited by a user over a predetermined time interval. Non-limiting examples may include geolocations visited by the client device, multimedia captured at the visited geolocations, transactions initiated via the client device at visited geolocations, weather conditions at the visited geolocations at the visited point-in-time, events taking place at the visited geolocation at the visited point-in-time, and/or any other information pertinent to a visited geolocation and/or client device.”).
McDorman does not explicitly disclose determining a memorable event based on the event not corresponding to one or more events having previously been experienced by the user during the predetermined window of time. However, McDorman discloses examples of data related to places of interest, specific events, or multimedia captured at the visited location.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified McDorman to include determining a memorable event based on the event not corresponding to one or more events having previously been experienced by the user during the predetermined window of time. McDorman discloses capturing data related to specific events or places of interest which a user has visited and using those events to accurately authenticate the user. One of ordinary skill in the art would recognize that an event not previously experienced by a user will be more memorable to the user compared to a routine event, making it obvious to utilize these more specific events as a means of authentication, as the user will have an easier time remembering the circumstances of the event, improving their ability to answer specific authentication questions about the event and avoiding the possibility that the user does not remember information about the event which would be needed for authentication.
Regarding claim 4, McDorman teaches all of the elements of claim 1, as stated above, as well as wherein the inferred label indicates a negative reaction experienced by the user and determining that the event experienced by the user is not a memorable event for the user (Para. 12, “Non-limiting examples may include geolocations visited by the client device, multimedia captured at the visited geolocations, transactions initiated via the client device at visited geolocations, weather conditions at the visited geolocations at the visited point-in-time, events taking place at the visited geolocation at the visited point-in-time, and/or any other information pertinent to a visited geolocation and/or client device”. Although McDorman does not explicitly disclose indicating a negative reaction experienced by the user, multiple non-limiting examples of events experienced by a user are provided, and using any other information pertinent to a visited location is described. One of ordinary skill in the art would understand that in order to authenticate a user by means of specific event experiences, the user needs to recall information about the event. If the event is negative (such as bad weather or a sad event), that information is pertinent to the visited location, and taking that into account would improve the event selection process, increasing the likelihood that the user will remember more information about the event).
Regarding claim 7, McDorman teaches all of the elements of claim 1, as stated above, as well as wherein the multiple individual sensors associated with the user comprise sensors for tracking physical activity, location, biomarkers, vital signs, and environmental factors (Para. 14, “The sensor data may include, geolocation (i.e. GPS sensor), weather conditions (i.e. temperature sensor), noise pollution (i.e. microphone), user kinematic movement (i.e. accelerometers), and/or so forth.” Biomarkers and vital signs are not explicitly disclosed as being used by McDorman. However, McDorman discloses that “The client device(s) 112 may include any sort of electronic device, such as a smartphone, etc.” (Para. 26), and the examples of sensors used are not limiting. One of ordinary skill in the art would realize that using a well-known technology such as a smart watch to capture sensor data related to vital signs would be an obvious improvement, increasing the amount of user data collected in order to more accurately identify and assess experienced events).
Regarding claim 9, the recited elements perform substantially the same function as that of claim 2. It is rejected under the same analysis.
Regarding claim 11, the recited elements perform substantially the same function as that of claim 4. It is rejected under the same analysis.
Regarding claim 14, the recited elements perform substantially the same function as that of claim 7. It is rejected under the same analysis.
Regarding claim 16, the recited elements perform substantially the same function as that of claim 2. It is rejected under the same analysis.
Regarding claim 18, the recited elements perform substantially the same function as that of claim 4. It is rejected under the same analysis.
Regarding claim 20, the recited elements perform substantially the same function as that of claim 7. It is rejected under the same analysis.
Claim(s) 6, 13, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over McDorman in view of Etkin (US Patent App. Pub. No. 2014/0040653 A1).
Regarding claim 6, McDorman teaches all of the elements of claim 1, as stated above, as well as using multiple sensors.
McDorman does not explicitly disclose performing interpolation on one or more individual data streams to fill in gaps in the individual data streams when an individual sensor associated with collecting the individual data stream has a sampling rate that is lower than one or more other individual sensors.
Etkin teaches performing interpolation on one or more individual data streams to fill in gaps in the individual data stream when an individual sensor associated with collecting the individual data stream has a sampling rate that is lower than one or more other individual sensors of the multiple individual sensors (Figs. 1, 10, Para. 20, “The method includes receiving a sequence of time-stamped data indicative of physical events from each of a plurality of sensors (100), generating an interpolation filter according to desired sampling times (102), and interpolating the sequences of time-stamped data with the generated filter to obtain sequences of data synchronized at desired sampling times (104).”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified McDorman to incorporate the teachings of Etkin to include performing interpolation on one or more individual data streams to fill in gaps in the individual data stream when an individual sensor associated with collecting the individual data stream has a sampling rate that is lower than one or more other individual sensors of the multiple individual sensors. McDorman teaches using multiple different sensors to capture data associated with a user at an event; however, McDorman does not mention synchronizing the data captured by these different sensors, leaving possible gaps in the captured data. Etkin teaches performing interpolation to obtain sequences of data synchronized at desired sampling times. One of ordinary skill in the art would understand that implementing the interpolation techniques of Etkin into the method of McDorman would provide the predictable benefit of more robust sensor data.
Regarding claim 13, the recited elements perform substantially the same function as that of claim 6. It is rejected under the same analysis.
Regarding claim 19, the recited elements perform substantially the same function as that of claim 6. It is rejected under the same analysis.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Koester, Mark, "AI as a Journaling Companion: How I Enhance My Creativity and Self-Awareness Using ChatGPT," June 27, 2023, downloaded from https://www.markwk.com/ai-as-journaling-companion.html, identifies how various streams of data might be integrated into a personal journal at 7 and 11-12.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID A WAMBST whose telephone number is (703)756-1750. The examiner can normally be reached M-F 9-6:30 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse can be reached at (571)272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID ALEXANDER WAMBST/Examiner, Art Unit 2663
/GREGORY A MORSE/Supervisory Patent Examiner, Art Unit 2698