Prosecution Insights
Last updated: April 19, 2026
Application No. 17/925,575

EVENT MODEL TRAINING USING IN SITU DATA

Non-Final OA: §101, §102, §103, §112

Filed: Nov 15, 2022
Examiner: LU, HWEI-MIN
Art Unit: 2142
Tech Center: 2100 — Computer Architecture & Software
Assignee: Lytt Limited
OA Round: 1 (Non-Final)

Grant Probability: 62% (Moderate)
OA Rounds: 1-2
To Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 62% (134 granted / 217 resolved; +6.8% vs TC avg)
Interview Lift: +39.5% (strong; measured across resolved cases with an interview)
Avg Prosecution: 3y 1m typical timeline (37 applications currently pending)
Total Applications: 254 across all art units (career history)
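
The lift figure is presumably computed by splitting this examiner's resolved cases by whether an interview was held and comparing allowance rates. A minimal sketch of that computation; the record layout and sample counts below are made up, chosen only to reproduce the displayed +39.5 point gap:

```python
# Hypothetical sketch: interview lift as the percentage-point gap in
# allowance rate between resolved cases with and without an interview.
# The record layout and sample counts are illustrative, not real data.

def interview_lift(cases: list[dict]) -> float:
    """Allowance-rate gap (percentage points): interviewed vs. not."""
    def allow_rate(subset):
        return sum(c["granted"] for c in subset) / len(subset)

    with_iv = [c for c in cases if c["had_interview"]]
    without_iv = [c for c in cases if not c["had_interview"]]
    return 100 * (allow_rate(with_iv) - allow_rate(without_iv))

# A 99% allow rate with an interview vs. 59.5% without gives +39.5 points.
cases = (
    [{"granted": True, "had_interview": True}] * 99
    + [{"granted": False, "had_interview": True}] * 1
    + [{"granted": True, "had_interview": False}] * 119
    + [{"granted": False, "had_interview": False}] * 81
)
print(f"{interview_lift(cases):+.1f}")  # +39.5
```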

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§102: 9.4% (-30.6% vs TC avg)
§103: 43.8% (+3.8% vs TC avg)
§112: 33.0% (-7.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 217 resolved cases.
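
The "vs TC avg" deltas read as plain differences between the examiner's per-statute rate and the Tech Center average estimate. A small sketch using the figures above; the TC averages here are back-solved from the displayed deltas rather than independently sourced (notably, all four back-solve to 40.0%, suggesting a single baseline):

```python
# Per-statute rates from the panel above. The TC averages are back-solved
# from the displayed deltas (tc_avg = examiner_rate - delta); they are not
# independently sourced. All four happen to back-solve to 40.0%.
examiner_rate = {"§101": 11.2, "§102": 9.4, "§103": 43.8, "§112": 33.0}
delta_vs_tc   = {"§101": -28.8, "§102": -30.6, "§103": 3.8, "§112": -7.0}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]
    print(f"{statute}: examiner {rate:4.1f}% vs TC avg {tc_avg:.1f}% "
          f"({delta_vs_tc[statute]:+.1f} pts)")
```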

Office Action

Grounds of rejection: §101, §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This office action is in response to communication(s): original application filed on 11/15/2022; said application claims a priority filing date of 06/18/2020. Claims 1-50 are pending. Claims 1, 18, 33, 38, 43, and 47 are independent.

Specification

The disclosure is objected to because of the following informalities: in ¶ [0050], "... to determine the presence of absence of an event" appears to be "... to determine the presence or absence of an event"; in ¶ [0122], "… represent a specific determination between the presence of absence of an event at a specific location …" appears to be "… represent a specific determination between the presence or absence of an event at a specific location …". Appropriate correction is required.

The use of the term "WiMAX" in ¶ [0152], which is a trade name or a mark used in commerce, has been noted in this application. The term should be accompanied by the generic terminology; furthermore, the term should be capitalized wherever it appears or, where appropriate, include a proper symbol indicating use in commerce, such as ™, SM, or ®, following the term. Although the use of trade names and marks used in commerce (i.e., trademarks, service marks, certification marks, and collective marks) is permissible in patent applications, the proprietary nature of the marks should be respected and every effort made to prevent their use in any manner which might adversely affect their validity as commercial marks.

Claim Objections

Claims 3, 15, 17, 27, 30, 32-33, 37-38, 42, 44, 46, 48, and 50 are objected to because of the following informalities:

In Claim 3, lines 1-3, "… wherein identifying the one or more events at the location comprises using an identity of the one or more events based on a known event or induced event at the location" appears to be "… wherein identifying the one or more events at the location comprises receiving an identity of the one or more events based on a known event or induced event at the location" according to Claim 20.

In Claim 15, lines 2-6, "… wherein training … using the first set of measurements and the identification of the one or more events as inputs comprises: calibrating … using the first set of measurements and the identification of the one or more events as inputs …" appears to be "… wherein training … using the first set of measurements and the identification of the one or more events as inputs comprises: calibrating … using the first set of measurements and the identification of the one or more events as the inputs …" (see also the 112(b) rejection of Claim 1).

In Claim 17, lines 3-7, "… wherein training … using the third set of measurements and the identification of the at least one additional event as inputs comprises: retaining … using the third set of measurements and the identification of the at least one additional event as inputs" appears to be "… wherein training … using the third set of measurements and the identification of the at least one additional event as the inputs comprises: retaining … using the third set of measurements and the identification of the at least one additional event as the inputs", because "using the third set of measurements and the identification of the at least one additional event as inputs" has been recited in its base claim.

In Claim 27, line 1, "The system of claim of claim 18 …" appears to be "The system of claim 18 …".

In Claim 30, lines 2-4, "… train … by calibrating … using the first set of measurements and the identification of the one or more events as inputs" appears to be "… train … by calibrating … using the first set of measurements and the identification of the one or more events as the inputs", because "using the first set of measurements and the identification of the one or more events as inputs" has been recited in its base claim.

In Claim 32, lines 4-5, "… calibrating … using the first set of measurements and the identification of the one or more events as inputs" appears to be "… calibrating … using the first set of measurements and the identification of the one or more events as the inputs", because "using the first set of measurements and the identification of the one or more events as inputs" has been recited in its base claim.

In Claim 33, line 8; Claim 37, line 1; Claim 38, line 12; and Claim 42, lines 1-2, "… the trained one or more event models …" appears to be "… the one or more trained event models …".

In Claims 44 and 48, lines 6-7, "… training one or more second event models of the one of the one or more event models using …" appears to be "… training one or more second event models of the one or more event models using …", because "A of B" indicates that A is a subset of B, and "one or more second event models" can be more than one event model, which can never be a subset of "the one of the one or more event models" (i.e., only one event model).

In Claims 46 and 50, lines 10 and 12-13, "… the trained one or more first event models …" appears to be "… the one or more trained first event models …"; and in Claims 46 and 50, lines 10-11 and 13, "… the retrained one or more first event models …" appears to be "… the one or more retrained first event models …".

Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-50 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites the limitation "... training one or more event models using the second set of measurements …" in line 4. There is insufficient antecedent basis for the limitation "the second set of measurements" in the claim. Clarification is required. According to Claim 18, "... training one or more event models using …" may be considered. However, in this case, "… using the first set of measurements and the identification of the one or more events as inputs" in Claims 13 and 15 may need to be changed to "… using the first set of measurements and the identification of the one or more events as the inputs" (see also the Claim Objections to Claim 30).

Claims 1 and 18 recite the limitation "... identifying events ... identifying/identify one or more events at a location ..." in lines 1-2 and 1-6 respectively, rendering these claims indefinite because it is unclear whether these two instances of "events" refer to the same events. Claims 2-17 and 19-32 are rejected for fully incorporating the deficiency of their respective base claims.

Claim 2 recites the limitation "... obtaining …" in line 2, which renders the claim indefinite because "…". Claims 6, 8-12, and 14 are rejected for fully incorporating the deficiency of their respective base claims.

Claim 16 recites the limitation "the second signal" in line 2. There is insufficient antecedent basis for this limitation in the claim. Since a "second signal" is recited in Claim 2 and not in Claim 1, for examination purposes, Claim 16 is considered as depending on Claim 2 instead of depending on Claim 1. Claim 17 is rejected for fully incorporating the deficiency of its base claim.

Claims 25, 26, and 27 recite the limitation "the second set of measurements" in line …. There is insufficient antecedent basis for this limitation in these claims. Since a "second set of measurements" is recited in Claim 19 and not in Claim 18, for examination purposes, Claims 25, 26, and 27 are considered as depending on Claim 19 instead of depending on Claim 18. Claims 28-29 are rejected for fully incorporating the deficiency of their respective base claims.

Claim 29 recites the limitation "the second signal" in line 4. There is insufficient antecedent basis for this limitation in the claim. Since a "second signal" is recited in Claim 19 and not in Claim 18, for examination purposes, its base Claim 27 is considered as depending on Claim 19 instead of depending on Claim 18 (see also the 112(b) rejections of Claim 27).

Claim 31 recites the limitation "the second signal" in line 4. There is insufficient antecedent basis for this limitation in the claim. Since a "second signal" is recited in Claim 19 and not in Claim 18, for examination purposes, Claim 31 is considered as depending on Claim 19 instead of depending on Claim 18.

Claims 33 and 38 recite the limitation "... identifying events ... identifying/identify one or more events at the location ..." in lines 1-…, rendering these claims indefinite because it is unclear whether these two instances of "events" refer to the same events. Claims 34-37 and 39-42 are rejected for fully incorporating the deficiency of their respective base claims.

Claims 37 and 42 recite the limitation "... to identify the at least one additional event at a second location" in line 3 and lines 3-4 respectively, which renders these claims indefinite because "…".

Claims 43 and 47 recite the limitation "... identifying events ... identifying/identify one or more events at one or more locations ..." in lines 1-…, rendering these claims indefinite because it is unclear whether these two instances of "events" refer to the same events. Claims 44-46 and 48-50 are rejected for fully incorporating the deficiency of their respective base claims.

Claims 45 and 49 recite the limitation "... using the second set of measurements from a plurality of locations of the one or more locations and the identification of the one or more events at the plurality of locations as inputs" in lines 3-5, which renders these claims indefinite because "…".

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-50 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim(s) recite(s) "…". This judicial exception is not integrated into a practical application because the claim(s) recite(s) additional elements/limitations of "…". The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because (a) the additional limitations/elements of ….

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-4, 6-21, and 23-50 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by PRADEEP (US 2019/0220698 A1, pub. date: 07/18/2019), hereinafter PRADEEP.

Independent Claims 1 and 18

PRADEEP discloses a method of identifying events (PRADEEP, ¶ [0001]: recognizing the occurrence of events from a wide variety of data sources), the method comprising:

identify(ing) one or more events at a location (PRADEEP, ¶¶ [0029]-[0033] with FIG. 1: the first training event detection data 140 configures the first training data collection device 130 to identify events of a first event type (in this example, utterances of a target phrase "YES") based on measurements performed by the first environmental sensor 132; the second training event detection data 170 configures the second training data collection device 160 to identify events of the same first event type (utterances of the target phrase "YES") based on measurements performed by the third environmental sensor 162; transmit training event detection data for identifying different event types to different training data collection devices; ¶¶ [0040]-[0042] with 234 in FIG. 2: receive, from a remote training system, training event detection data (not illustrated in FIG. 2) for identifying events of a first event type (which may be referred to as a "training event type") based on at least environmental data of the first environmental data type, such as the first environmental data 232; based on the received training event detection data, configure a training event detector 234 (which may be referred to as a "training event detection module") included in the data collection device 200 to identify events of the first event type based on at least environmental data of the first environmental data type; the training event detector 234 may exploit already-existing detection logic (e.g., the training data collection device 200 may already include a configurable speech detection module); the training event detector 234 is configured to apply an ML model specified by the training event detection data; the configured training event detector 234 receives the first environmental data 232, detects instances of the first event type based on at least the first environmental data 232, and produces training event instance data 236 corresponding to the detected event instances; the training event detector 234 may detect multiple concurrent event instances in the first environmental data 232, and generate multiple corresponding items of training event instance data 236; the training event detector 234 may be configured to detect multiple different training events);

obtaining/receive a first set of measurements comprising a first signal at the location (PRADEEP, ¶¶ [0029]-[0032] with FIG. 1: a plurality of training data collection devices, including a first training data collection device 130 and a second training data collection device 160; transmit a first training event detection data 140 to the first training data collection device 130, which is at a first location 120 (which may also be referred to as "environment 120") and includes a first environmental sensor 132 and a second environmental sensor 134 that is different than the first environmental sensor 132; similarly, transmit a second training event detection data 170 to the second training data collection device 160, different than the first training data collection device 130, which is at a second location 150 (which may also be referred to as "environment 150") different than the first location 120, and includes a third environmental sensor 162 and a fourth environmental sensor 164 that is different than the third environmental sensor 162; the first and third environmental sensors 132 and 162 are of a same first sensor type (audio sensors), and measurements performed by the first and third environmental sensors 132 and 162 are used to obtain environmental data of a same first environmental data type; the second and fourth environmental sensors 134 and 164 are of a same second sensor type (image and/or video sensors) different than the first sensor type, and measurements performed by the second and fourth environmental sensors 134 and 164 are used to obtain environmental data of a same second environmental data type (images captured at specified frame rates) different than the first environmental data type; ¶ [0026]: a "sensor type" (or "environmental sensor type") refers to a particular modality that an environmental sensor is designed to operate in and/or receive or detect information about; e.g., some broad modalities may include, but are not limited to, audio, light, haptic, flow rate, distance, pressure, motion, chemical, barometric, humidity, and temperature; ¶ [0109] with FIG. 16: the environmental components 1660 may include, e.g., illumination sensors, temperature sensors, humidity sensors, pressure sensors (e.g., a barometer), acoustic sensors (e.g., a microphone used to detect ambient noise), proximity sensors (e.g., infrared sensing of nearby objects), and/or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment; ¶ [0111] with FIG. 16: the communication components 1664 may include acoustic detectors, e.g., microphones, to identify tagged audio signals; ¶¶ [0036]-[0039] with 230, 232, 250, and 252 in FIG. 2: the training data collection device 200 includes a first environmental sensor 230 (which may be referred to as "first sensor 230") arranged to perform measurements of physical phenomena occurring at a location 210 (which may be referred to as "environment 210" or "scene 210"); the training data collection device 200 obtains first environmental data 232 of a first environmental data type based on one or more of the measurements performed by the first environmental sensor 230; the first environmental data 232 may be provided as samples, each based on one or more measurements performed during a respective point and/or period of time; the first environmental sensor 230 may be configured to periodically generate new samples at a sampling rate; the first environmental data 232 includes data generated based on measurements of physical phenomena performed by the first environmental sensor 230, which may include data provided by the first environmental sensor 230 without modification and/or data resulting from processing data provided by the first environmental sensor 230; the training data collection device 200 also includes a second environmental sensor 250 (which may be referred to as "second sensor 250"), which is also arranged to perform measurements of physical phenomena occurring at the location 210; the training data collection device 200 obtains second environmental data 252 of a second environmental data type based on one or more of the measurements performed by the second environmental sensor 250; the second environmental data type is different than the first environmental data type; the second environmental sensor 250 may be configured to periodically generate new samples at a sampling rate, and/or group the second environmental data 252 into sets of multiple samples for processing; the training data collection device 200 operates the first and second environmental sensors 230 and 250 concurrently, such that new samples of the first environmental data 232 and new samples of the second environmental data 252 are both generated for a period of time);

train(ing) one or more event models using the first set of measurements and the identification of the one or more events as inputs (see also the 112(b) rejection of Claim 1) (PRADEEP, ¶ [0027]: the term "label" applies to an output value of an event detector that characterizes an event instance; the term "labeling" refers to a process of generating a label for an input data; in different implementations, such labels may be used as target values for corresponding training data items for supervised ML training, with a label indicating one or more desired output values when a trained ML model is applied to the corresponding training data item; the term "label" may also apply to a target value for a training data item in ML training; supervised ML training attempts to, using a collection of training data items and respective labels, infer an ML model that maps the training data items to their labels with reasonable accuracy and also labels unseen data items with reasonable accuracy; the corresponding unsupervised procedure is known as clustering, and involves grouping data into categories based on some measure of inherent similarity or distance; reasonably accurate first labels automatically generated for a first type of environmental data are used to characterize and generate second labels for a second type of environmental data; ¶¶ [0029]-[0033] with FIG. 1: receive the first device-generated training data 142, and may store a portion of the first device-generated training data 142 in a training data repository 112 (which may be referred to as a "training data storage module"); receive the second device-generated training data 172, and may store a portion of the second device-generated training data 172 in the training data repository 112; an ML model trainer 114 (which may be referred to as an "ML model training module") configured to generate a trained ML model 116 from device-generated training data obtained from the training data repository 112; the generation of the ML model may be referred to as "training" or "learning"; the ML model trainer 114 is configured to automatically generate multiple different ML models from the same or similar training data for comparison; ¶¶ [0041]-[0048] with 236, 238, 240, 256, 260, 270, and 272 in FIG. 2: the training event detector 234 further performs labeling of the detected event instances, and data corresponding to one or more resulting labels may be included in the training event instance data 236; the training event detector 234 further generates confidence values associated with the detection of event instances or generation of labels, and data corresponding to one or more of the confidence values may be included in the training event instance data 236; the training event detector 234 may be configured to detect multiple different training events, and include corresponding labels in resulting training event instance data 236; a training data generator 238 (which may be referred to as a "training data generation module") configured to receive the training event instance data 236 generated by the training event detector 234 from the first environmental data 232, and selectively generate corresponding device-generated training data 240 based on at least selected portions of the second environmental data 252; the training data generator 238 generates the device-generated training data 240 by automatically selecting portions of the second environmental data 252 (which may be temporarily stored in the environmental data buffer 254) that correspond to respective items of training event instance data 236 (which in turn are identified based on corresponding portions of the first environmental data 232 and corresponding measurements performed by the first environmental sensor 230); the resulting device-generated training data 240 may be delivered to a remote training system, via the device controller interface 220, for use in generating one or more ML models, as described for the trained ML model 116 in FIG. 1; the device-generated training data 240 is used for local training, at the training data collection device 200, of an ML model; the device-generated training data 240 may include additional metadata, such as a location of capture, time and date of capture, hardware details of the first environmental sensor 230 and/or the second environmental sensor 250 (such as make, model, and/or resolution), and/or operating parameters of the first environmental sensor 230 and/or the second environmental sensor 250 (such as sample rate, exposure time, and/or quality setting); the metadata may be used for automatically selecting appropriate training data items for training a particular ML model; the training data generator 238 is configured to utilize a training data selector 256 (which may be referred to as a "training data selection module") which is configured to automatically select a sub-portion of the second environmental data 252; the selection may be made based on a region of interest (ROI) indicated by the training event instance data 236; e.g., a time, direction, position, area (including, e.g., a bounding box or a bitmap), and/or volume indicated by the training event instance data 236; the training data generator 238 is configured to apply various criteria to select training event instances for which corresponding device-generated training data 240 is generated; one such criterion may be a similarity of training data provided by the training data selector 256 for a current training event instance to training data for a previous training event instance; e.g., techniques such as fingerprinting, hashing, local feature analysis, or sum of squared difference may be used to determine the training data for the current event instance is too similar to previous training data to be likely to meaningfully improve upon previous device-generated training data 240; another such criterion is whether the training event instance data 236 indicates a low confidence value (such as below a threshold confidence value), with such training event instance data items not resulting in corresponding device-generated training data 240; the training data collection device 200 may be configured for testing an ML model 272, such as an ML model trained using device-generated training data such as the device-generated training data 240, and further include an ML model testing controller 260 (which may be referred to as an "ML model testing control module") and an ML event detector 270 (which may be referred to as an "ML event detection module") for that purpose; the ML model testing controller 260 receives the ML model 272 from a remote system, and configures the ML event detector 270 to apply the received ML model 272 to detect events of a second event type (which may be the same as, or different than, the first event type) based on the second environmental data 252, and generate corresponding event instance data 274; the ML event detector 270 is applied to the second environmental data 252, resulting in items of event instance data 274, and the training event detector 234 is applied to the first environmental data 232, resulting in items of training event instance data 236; the ML model testing controller 260 is configured to compare the items of event instance data 274 against corresponding items of training event instance data 236 (for example, items generated based on measurements performed at approximately a same time); the ML model testing controller 260 records a testing result based on the comparison, and testing results recorded over a period of time are transmitted to a remote system for evaluation; where the comparison determines there is a substantial difference, a corresponding item of device-generated training data 240 is generated, much as previously described, as it may be useful for further generalization of (or, in some cases, specialization of) later ML models);

and using/use the one or more event models to identify at least one additional event at one or more locations (PRADEEP, ¶ [0035] with FIG. 1: application of the trained ML model 116 by a trained system 190 at a third location 180 (which may be referred to as "environment 180"); the trained system 190 is also configured to use the trained ML model 116 as an ML model 192 for detecting events; the trained system 190 detects an event instance 198 by applying the ML model 192 to the captured image frames; in response to the detection of the event instance 198, the trained system 190 performs additional responsive actions; ¶ [0049] with 270, 272, 274, and 276 in FIG. 2: apply an ML model 272, such as an ML model trained using device-generated training data 240, for identifying event instances for a third event type (which may be the same as, or different than, the first event type) in a non-testing configuration; the training data collection device 200 includes the ML event detector 270, which generates event instance data 274 which is provided to an event processor 276 (which may be referred to as an "event processing module"), such as an application software program, configured to respond to events of the third event type based on received event instance data 274).

PRADEEP further discloses a system (PRADEEP, ¶ [0029] with 100 in FIG. 1: system/architecture 100; ¶¶ [0097] and [0104] with 1504 in FIG. 15 and 1600 in FIG. 16: the software architecture 1502 may execute on hardware such as a machine 1600 of FIG. 16 that includes, among other things, processors 1610, memory 1630, and input/output (I/O) components 1650; a representative hardware layer 1504 is illustrated and can represent, for example, the machine 1600 of FIG. 16; the example machine 1600 is in a form of a computer system) for identifying events (PRADEEP, ¶ [0001]: recognizing the occurrence of events from a wide variety of data sources), the system comprising: a memory (PRADEEP, ¶¶ [0097] and [0105]-[0107] with 1510 in FIG. 15 and 1630 in FIG. 16: a memory/storage 1510; memory 1630); an identification program stored in the memory (PRADEEP, ¶¶ [0097] and [0106]-[0107] with 1508 in FIG. 15 and 1616 in FIG. 16: the executable instructions 1508 represent executable instructions of the software architecture 1502, including implementation of the methods, modules and so forth; instructions 1508 held by processing unit 1506 may be portions of instructions 1508 held by the memory/storage 1510; the storage unit 1636 and memory 1632, 1634 store instructions 1616 embodying any one or more of the functions); and a processor (PRADEEP, ¶¶ [0097] and [0105] with 1506 in FIG. 15 and 1610 in FIG. 16: a processing unit 1506; processors 1610), wherein the identification program, when executed on the processor, configures the processor to perform the method described above (PRADEEP, ¶¶ [0104]-[0107] with FIG. 16: an example machine 1600 configured to read instructions from a machine-readable medium (e.g., a machine-readable storage medium) and perform any of the features described herein; one or more processors 1612a to 1612n may execute the instructions 1616 and process data; one or more processors 1610 may execute instructions provided or identified by one or more other processors 1610; the instructions 1616 may also reside, completely or partially, within the memory 1632, 1634, within the storage unit 1636, within at least one of the processors 1610 (e.g., within a command buffer or cache memory), within memory of at least one of the I/O components 1650, or any suitable combination thereof, during execution thereof; the instructions, when executed by one or more processors 1610 of the machine 1600, cause the machine 1600 to perform one or more of the features described herein).

Claims 2 and 19

PRADEEP discloses all the elements as stated in Claims 1 and 18 respectively, and further discloses obtaining/receive a second set of measurements comprising a second signal at the location (PRADEEP, ¶¶ [0029]-[0032] with FIG. 1, ¶¶ [0109] and [0111] with FIG. 16, and ¶¶ [0036]-[0039] with 230, 232, 250, and 252 in FIG. 2, quoted in full under Claims 1 and 18 above: the training data collection device 200 also includes a second environmental sensor 250 arranged to perform measurements of physical phenomena occurring at the location 210, from which it obtains second environmental data 252 of a second environmental data type different than the first environmental data type, with the two sensors operated concurrently), wherein identifying/the identification of the one or more events at the location comprises identifying/the identification of the one or more events at the location using/based on the second set of measurements (PRADEEP, ¶¶ [0029]-[0033] with FIG. 1 and ¶¶ [0040]-[0042] with 234 in FIG. 2, quoted in full under Claims 1 and 18 above: the training event detection data configures each training data collection device to identify events of the first event type based on measurements performed by its environmental sensors), and wherein the first signal and the second signal represent different physical measurements (PRADEEP, same passages: the first and third environmental sensors 132 and 162 are of a same first sensor type (audio sensors), while the second and fourth environmental sensors 134 and 164 are of a same second sensor type (image and/or video sensors) different than the first sensor type).

Claims 3 and 20

PRADEEP discloses all the elements as stated in Claims 1 and 18 respectively, and further discloses wherein identifying/the identification of the one or more events at the location comprises using/receiving an identity of the one or more events based on a known event or induced event at the location (PRADEEP, ¶¶ [0029]-[0033] with FIG. 1 and ¶¶ [0040]-[0042] with 234 in FIG. 2, quoted in full under Claims 1 and 18 above: the configured training event detector 234 detects instances of the first event type (in this example, utterances of a target phrase "YES"), further performs labeling of the detected event instances, and further generates confidence values associated with the detection of event instances or generation of labels; ¶ [0046]: techniques such as fingerprinting, hashing, local feature analysis, or sum of squared difference may be used to determine the training data for the current event instance is too similar to previous training data to be likely to meaningfully improve upon previous device-generated training data 240).

Claims 4 and 21

PRADEEP discloses all the elements as stated in Claims 1 and 18 respectively, and further discloses wherein the first set of measurements comprises acoustic measurements obtained at the location (PRADEEP, ¶¶ [0109] and [0111] with FIG. 16, quoted in full under Claims 1 and 18 above: acoustic sensors (e.g., a microphone used to detect ambient noise) and acoustic detectors, e.g., microphones, to identify tagged audio signals; ¶ [0079] with 1032 in FIG. 10: during a first period of time twelve seconds in duration, audio data 1032 (which may be referred to as "seventh environmental data 1032" of a seventh environmental data type) is obtained based on measurements performed by the seventh environmental sensor 1030 at a first sampling rate; ¶ [0084] with 1220 in FIG. 12: a fourth training data collection device 1210 includes a tenth environmental sensor 1212 configured to capture audio data 1220; during a fifth period of time four seconds in duration, audio data 1220 (which may be referred to as "tenth environmental data 1220" of a tenth environmental data type) is obtained based on measurements performed by the tenth environmental sensor 1212 at a fourth sampling rate).

Claims 6 and 23

PRADEEP discloses all the elements as stated in Claims 2 and 18 respectively, and further discloses wherein the second/first set of measurements comprise/are received from at least one of a temperature sensor (measurement), a flow meter (measurement), a pressure sensor (measurement), a strain sensor (measurement), a position sensor (measurement), a current meter (measurement), a level sensor (measurement), a phase sensor (measurement), a composition sensor (measurement), an optical sensor (measurement), an image sensor (measurement), or any combination thereof (PRADEEP, ¶ [0026] and ¶ [0109] with FIG. 16, quoted in full under Claims 1 and 18 above: broad sensor modalities may include, but are not limited to, audio, light, haptic, flow rate, distance, pressure, motion, chemical, barometric, humidity, and temperature; the environmental components 1660 may include, e.g., illumination sensors, temperature sensors, humidity sensors, pressure sensors (e.g., a barometer), acoustic sensors (e.g., a microphone used to detect ambient noise), proximity sensors (e.g., infrared sensing of nearby objects), and/or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment).
Claims 7 and 24

PRADEEP discloses all the elements as stated in Claims 1 and 18 respectively, and further discloses creating labeled data using the identified one or more events and the first set of measurements (PRADEEP, ¶ [0027], quoted in full under Claims 1 and 18 above: such labels may be used as target values for corresponding training data items for supervised ML training, with a label indicating one or more desired output values when a trained ML model is applied to the corresponding training data item) …
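
Stepping back from the citation detail: the PRADEEP mechanism the examiner maps onto independent claims 1 and 18 is a cross-modal auto-labeling loop. A detector that is already reliable on one sensor modality (audio, via element 234) labels time-aligned data from a second modality (video, via elements 238/256), and the auto-labeled pairs become supervised training data for a new model (element 114). A minimal sketch of that loop follows; every function and field name in it is hypothetical, chosen for illustration, and is not taken from PRADEEP's disclosure or from the application:

```python
# Illustrative sketch (hypothetical names throughout) of the cross-modal
# training-data generation the Office Action attributes to PRADEEP:
# a trusted detector on modality A labels concurrent samples from
# modality B, yielding supervised training pairs for a model on B.
from dataclasses import dataclass

@dataclass
class Sample:
    t: float        # capture timestamp (seconds)
    data: bytes     # raw sensor payload

def collect_training_pairs(audio, video, detect_event, min_confidence=0.8):
    """Pair audio-detected events with time-aligned video samples.

    detect_event(sample) -> (label, confidence) stands in for the
    configured training event detector (PRADEEP's element 234); low-
    confidence detections are dropped, mirroring the confidence
    criterion described in the citations above.
    """
    pairs = []
    for a in audio:
        label, confidence = detect_event(a)
        if label is None or confidence < min_confidence:
            continue
        # Select the temporally closest second-modality sample,
        # standing in for the training data selector (element 256).
        v = min(video, key=lambda s: abs(s.t - a.t))
        pairs.append((v.data, label))
    return pairs  # fed to a supervised trainer (element 114's role)
```

The claimed "training one or more event models using the first set of measurements and the identification of the one or more events as inputs" is being read onto exactly this pairing-plus-training step, which suggests that distinguishing the in-situ character of the claimed measurements may matter in the response.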

Prosecution Timeline

Nov 15, 2022: Application Filed
Sep 13, 2025: Non-Final Rejection — §101, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602578: LIGHT SOURCE COLOR COORDINATE ESTIMATION SYSTEM AND DEEP LEARNING METHOD THEREOF (2y 5m to grant; granted Apr 14, 2026)
Patent 12596954: MACHINE LEARNING FOR MANAGEMENT OF POSITIONING TECHNIQUES AND RADIO FREQUENCY USAGE (2y 5m to grant; granted Apr 07, 2026)
Patent 12591770: PREDICTING A STATE OF A COMPUTER-CONTROLLED ENTITY (2y 5m to grant; granted Mar 31, 2026)
Patent 12579466: DYNAMIC USER-INTERFACE COMPARISON BETWEEN MACHINE LEARNING OUTPUT AND TRAINING DATA (2y 5m to grant; granted Mar 17, 2026)
Patent 12561222: REDUCING BIAS IN MACHINE LEARNING MODELS UTILIZING A FAIRNESS DEVIATION CONSTRAINT AND DECISION MATRIX (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 62%
With Interview: 99% (+39.5%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 217 resolved cases by this examiner. Grant probability derived from career allow rate.
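
How these projections appear to relate to the examiner statistics above (an assumption about this tool's methodology, not anything it documents): the "With Interview" figure reads as the conditional allow rate for interviewed cases, the lift as its gap over non-interviewed cases, and the 62% as the blend across all 217 resolved cases. Under that reading, the interviewed share can be back-solved:

```python
# Assumed (not documented) relationship among the projection figures.
with_interview = 0.99      # displayed conditional allow rate (interviewed)
lift = 0.395               # displayed interview lift, in rate points
without_interview = with_interview - lift          # 0.595

granted, resolved = 134, 217
career_rate = granted / resolved                   # 0.6175 -> shown as 62%

# Blend: career_rate = s*with_interview + (1 - s)*without_interview,
# where s is the share of resolved cases that had an interview.
s = (career_rate - without_interview) / (with_interview - without_interview)
print(f"Implied interviewed share: {s:.1%}")       # about 5.7%
```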
