Prosecution Insights
Last updated: April 19, 2026
Application No. 17/408,142

APPARATUSES, COMPUTER-IMPLEMENTED METHODS, AND COMPUTER PROGRAM PRODUCTS FOR CONTINUOUS PERCEPTION DATA LEARNING

Final Rejection §103
Filed: Aug 20, 2021
Examiner: HAN, BYUNGKWON
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: Intelligrated Headquarters LLC
OA Round: 2 (Final)
Grant Probability: 0% (At Risk)
OA Rounds: 3-4
To Grant: 3y 3m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 1 resolved; -55.0% vs TC avg). Grants only 0% of cases.
Interview Lift: +0.0% (minimal lift), based on resolved cases with interview
Avg Prosecution: 3y 3m (typical timeline), 28 currently pending
Total Applications: 29 across all art units (career history)

Statute-Specific Performance

§101: 34.7% (-5.3% vs TC avg)
§103: 44.0% (+4.0% vs TC avg)
§102: 2.0% (-38.0% vs TC avg)
§112: 19.3% (-20.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 1 resolved case.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-2, 7-8, 10-14, and 16-20 are amended. Claims 6 and 15 are canceled. Claims 1-5, 7-14, and 16-20 are pending and are examined herein. Claims 1-5, 7-14, and 16-20 are rejected under 35 U.S.C. 103.

Response to Amendment

The amendment filed December 9, 2025 has been entered. Claims 1-2, 7-8, 10-14, and 16-20 were amended. Claims 6 and 15 were canceled. Claims 1-5, 7-14, and 16-20 are pending and are examined herein. Applicant's amendments to the claims have overcome each and every objection previously set forth in the Non-Final Rejection Office Action mailed September 9, 2025.

Response to Arguments

Applicant's arguments (see pages 8-13, filed December 9, 2025) with respect to the rejection of claims 1-5, 7-14, and 16-20 under 102 and 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Yang et al. (US 11294383 B2), introduced in the 35 U.S.C. 103 rejection below to teach the amended features in combination.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 3, 9-10, 12, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (NPL: "Lifelong Federated Reinforcement Learning: A Learning Architecture for Navigation in Cloud Robotic Systems") in view of Yang et al. (US 11294383 B2).

Regarding Claim 1, Liu teaches: receive an environment perception data set associated with one or more real-time sensors; (Pg. 4560, Section IV (Experiments) of Liu states "In every environment, a Turtlebot3 equipped with a laser range sensor is used as the robot platform. The scanning range is from 0.13 m to 4 m.
The target is represented by a red square object." This shows that the computing device, the Turtlebot3, receives environment perception data through a laser range sensor in real time.)

train an updated individual model based at least in part on the environment perception data set; (Pg. 4557, Fig. 2, Proposed Architecture section of Liu states "In robot→Environment, the robot learns to avoid some new types of obstacles in the new environment through reinforcement learning and obtains the private Q-network model. Not only from one robot training in different environments, private models can also be resulted from multiple robots. It is a type of federated learning." Figure 2 of Liu illustrates this architecture.)

transmit, to a central learning system via at least one high-throughput communication network, to cause the real-time data central learning system to update a central model based at least in part on the updated individual model. (Pg. 4557, Fig. 2, Proposed Architecture section of Liu states "After that, the private network will be uploaded to the cloud. The cloud server evolves the shared model by fusing private models to the shared model. We input the output of the shared model as added features to the Q-network in reinforcement learning, or simply transfer all parameters to the Q-network. Iterating this step, models on the cloud become increasingly powerful." The central model is updated iteratively through the cloud server based on the private models, as the private models learn new information through one or more robots.)

However, Liu does not explicitly teach: generate error data objects in response to failed operations or inaccurate results produced by the updated individual model; and transmit the error data object to a real-time data central learning system to cause the real-time data central learning system to update a central model based at least in part on a plurality of generated error data objects.

Yang teaches: generate error data objects in response to failed operations or inaccurate results produced by the updated individual model; (Column 6, lines 22-25 and 30-39 of Yang states "In this case, the data transformation unit 150 of the self-learning robot 100 may transform the data received by the data reception unit 110 into data to be transmitted to the user equipment or the server… In addition, the data transformation unit 150 may transmit information related to recognition failure including the environment in which the self-learning robot 100 recognizes an object, such as information on the distance from the object and temperature and position information of the object to the user equipment or the server. Accordingly, the data transformation unit 150 may transmit sensing video data or audio data, from which personal information has been deleted, data related to recognition failure, etc." Column 7, lines 48-58 of Yang states "As shown in FIG. 2, the self-learning robot 100 according to the embodiment of the present invention may recognize a specific object and receive recognition data through the data reception unit 110. In addition, the data recognition unit 120 may match the data received from the data reception unit 110 to data included in the database in the self-learning robot 100. The result output unit 130 may output the matching result of the data recognition unit 120. In addition, the recognition result verification unit 140 may determine accuracy of the matching result of the result output unit 130." Column 10, lines 1-18 of Yang states "As shown in FIG. 7, the self-learning robot 100 may move within the home. At this time, the self-learning robot 100 may find a pottery 700, retrieve data having a similar image from the database, and recognize the pottery 700 as a trash can 705. Accordingly, the self-learning robot 100 may perform an object-coping behavior of collecting and putting the trash in the pottery 700. In this case, when a user inputs negative feedback 710 to the object-coping behavior of the self-learning robot 100, the self-learning robot 100 may define the behavior of putting the trash in the pottery as recognition failure. Accordingly, the self-learning robot 100 may transmit the captured image of the pottery 700 to the server 200 to request database update for self-learning. The server 200 may generate and transmit information on the pottery 700 and data on an object-coping behavior corresponding to the pottery 700 to the self-learning robot 100, upon determining that the database is able to be updated only using the image captured by the self-learning robot 100." When recognition failure is determined, the self-learning robot in Yang generates recognition data to be sent to the server and updates the database corresponding to the failure. As the self-learning robot iteratively learns through this process, it stays updated and continues the process.)

transmit the error data object to a real-time data central learning system to cause the real-time data central learning system to update a central model based at least in part on a plurality of generated error data objects; (Column 7, line 63 – Column 8, line 13 of Yang states "In contrast, when the recognition result verification unit 140 determines that accuracy of the matching result is less than the predetermined level, the data transformation unit 150 transforms the recognition data according to a predetermined algorithm and the server communication unit 165 may transmit the transformed data to the server 200. The recognition model reception unit 180 of the self-learning robot 100 may receive the object information and the data on the object-coping behavior from the recognition model transmission unit 270 of the server 200. In addition, the recognition model updating unit 190 of the self-learning robot 100 may update the database of the self-learning robot 100 using the data received from the server 200. FIG. 3 is a diagram illustrating an example in which a server according to an embodiment of the present invention receives recognition failure data from a plurality of self-learning robots.")

It would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to combine the teachings of Liu with Yang because both references are directed to improving learning in cloud robotic systems using robot-obtained environment data. Yang teaches that a robot uses sensor data and determines when its recognition result is inaccurate or results in a failed operation, sends the resulting failure-related data to a server, and the server updates learning information using data accumulated from a plurality of self-learning robots.
Liu teaches a cloud robotic learning architecture in which robots learn in their own environments through reinforcement learning, and their learned knowledge is fused in the cloud by a knowledge fusion algorithm to upgrade a shared model deployed on the cloud, so that prior experience can be transferred and reused by other robots. One of ordinary skill in the art would have been motivated to incorporate the teachings of Yang into Liu in order to improve adaptation and learning from failed operations in various environments, allow knowledge learned from one robot's failure or inaccurate result to benefit all robots, and improve overall robot learning efficiency and performance in a predictable manner.

Regarding Claim 3, the rejection of claim 1 is incorporated herein. Furthermore, the combination of Liu and Yang teaches: receive, from the real-time data central learning system, an updated central model trained based at least in part on the updated individual model and a plurality of other updated individual models associated with a plurality of other computing devices; (Pg. 4557, Section A (Procedure of LFRL) of Liu states "In LFRL, Robot 2 and Robot 3 download the shared model 1G as the initial actor model in reinforcement learning. Then they can get their private networks Q2 and Q3 through reinforcement learning in Environment 2 and Environment 3. After completing the training, LFRL uploads Q2 and Q3 to the cloud. In the cloud, strategy models Q2 and Q3 will be fused into shared model 1G, and then shared model 2G will be generated.") and replace the updated individual model with the updated central model. (Pg. 4557, Section A (Procedure of LFRL) of Liu states "In the future, the shared model 2G can be used by other cloud robots. Other robots will also upload their private strategy models to the cloud server to promote the evolution of the shared model." Also, Pg. 4561, Section B (Evaluation for the Architecture) of Liu further states "The cloud server fused the private model and the shared model 1G to obtain the shared model 2G. With the same mode, follow-up evolutions would be performed.")

Regarding Claim 9, the rejection of claim 1 is incorporated herein. Furthermore, the combination of Liu and Yang teaches that the updated individual model embodies a reinforcement learning model. (Pg. 4557, Section A (Procedure of LFRL) of Liu states "This section displays a practical example of LFRL: there are 4 robots, 3 different environments and cloud servers. The first robot obtains its private strategy model Q1 through reinforcement learning in Environment 1 and upload it to the cloud server as the shared model 1G. After a while, Robot 2 and Robot 3 desire to learn navigation by reinforcement learning in Environment 2 and Environment 3. In LFRL, Robot 2 and Robot 3 download the shared model 1G as the initial actor model in reinforcement learning. Then they can get their private networks Q2 and Q3 through reinforcement learning in Environment 2 and Environment 3.")

Claims 10 and 19 recite substantially similar subject matter as claim 1, and are rejected with the same rationale, mutatis mutandis.

Regarding claims 12 and 18, the rejection of claim 10 is incorporated herein. Claims 12 and 18 recite substantially similar subject matter as claims 3 and 9, respectively, and are rejected with the same rationale, mutatis mutandis.

Regarding claim 20, the rejection of claim 19 is incorporated herein. Claim 20 recites substantially similar subject matter as claim 3, and is rejected with the same rationale, mutatis mutandis.

Claims 2, 4, 7-8, 11, 13, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (NPL: "Lifelong Federated Reinforcement Learning: A Learning Architecture for Navigation in Cloud Robotic Systems") in view of Yang et al. (US 11294383 B2), further in view of Lerner et al. (U.S. Pub. 2020/0349833 A1).
Regarding Claim 2, the rejection of claim 1 is incorporated herein. The combination of Liu and Yang does not explicitly teach that the environment perception data set is received via one or more high-throughput communication networks. However, Lerner explicitly teaches that the environment perception data set is received via one or more high-throughput communication networks. ([0078] of Lerner states "Computing component 500 might also include a communications interface 524. Communications interface 524 might be used to allow software and data to be transferred between computing component 500 and external devices. Examples of communications interface 524 might include a modem or softmodem, a network interface (such as Ethernet, network interface card, IEEE 802.XX or other interface).")

It would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to combine the teachings of Liu, Yang, and Lerner because the references use a federated learning approach in which local models are trained and used to update the central model. One with ordinary skill in the art would be motivated to incorporate the teachings of Lerner into those of Liu and Yang since adding Lerner's real-time perception data collection technique would predictably improve the accuracy and adaptability of the models in the federated learning system described by the combination of Liu and Yang.

Regarding Claim 4, the rejection of claim 1 is incorporated herein. Furthermore, the combination of Liu, Yang, and Lerner teaches that the one or more real-time sensors comprises a real-time video sensor, a real-time image sensor, a real-time motion sensor, a real-time location sensor, or a combination thereof. ([0021] of Lerner states "The vehicle 120 can include one or more sensors, shown in greater detail in FIG. 2. Sensors of vehicle 120 can capture real-time data enabling the vehicle 120 to monitor the current road conditions. The one or more sensors can be configured to capture, generate, and/or acquire data that is characteristic of a surrounding driving environment, inclusive of objects (e.g., buildings, pedestrians, other vehicles, etc.), terrain (e.g., trees, weather, etc.), traffic, weather, and other traits of the outside area surrounding the vehicle 120. In some examples, the sensor(s) can capture, generate, and/or acquire data corresponding to: a location (e.g., a relative location, a global location, coordinates, etc.) of vehicle 120; a heading (e.g., a relative heading, an absolute heading, etc.) of vehicle 120; and/or a change in heading of the vehicle 120; and data pertaining to a surrounding driving environment (e.g., traffic volume, weather, hazards), just to name a few possibilities")

Regarding Claim 7, the rejection of claim 1 is incorporated herein. Furthermore, the combination of Liu, Yang, and Lerner teaches that the one or more real-time sensors are embodied within a computing device. ([0021] of Lerner, quoted above with respect to claim 4, describes the one or more sensors included within the vehicle 120.)

Regarding Claim 8, the rejection of claim 7 is incorporated herein. Furthermore, the combination of Liu, Yang, and Lerner teaches that at least one of the one or more real-time sensors is external to the computing device. ([0029] of Lerner states "As seen in FIG. 1, a vehicle-infrastructure system 100 can be implemented to include a communication network 110, which enables cooperative communication for the exchange of data between vehicle 120 and one or more components external to the vehicle 120, such as additional vehicles 101A-101C, road condition services 102, and infrastructure device(s) 103. Examples of such communication include vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. In referring to the federated learning aspects described above, vehicle 120 can use V2V communications to collect real-time data from nearby vehicles 101A-101C.")

Regarding claims 11, 13, and 16-17, the rejection of claim 10 is incorporated herein. Claims 11, 13, and 16-17 recite substantially similar subject matter as claims 2, 4, and 7-8, respectively, and are rejected with the same rationale, mutatis mutandis.

Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (NPL: "Lifelong Federated Reinforcement Learning: A Learning Architecture for Navigation in Cloud Robotic Systems") in view of Yang et al. (US 11294383 B2), further in view of Breckenridge et al. (U.S. Pub. 2012/0191630 A1).

Regarding Claim 5, the rejection of claim 1 is incorporated herein.
The combination of Liu and Yang teaches: receive, from the real-time data central learning system, an updated central model trained based at least in part on the updated individual model and a plurality of other updated individual models associated with a plurality of other computing devices; (Pg. 4557, Section A (Procedure of LFRL) of Liu states "In LFRL, Robot 2 and Robot 3 download the shared model 1G as the initial actor model in reinforcement learning. Then they can get their private networks Q2 and Q3 through reinforcement learning in Environment 2 and Environment 3. After completing the training, LFRL uploads Q2 and Q3 to the cloud. In the cloud, strategy models Q2 and Q3 will be fused into shared model 1G, and then shared model 2G will be generated.") and apply a second environment perception data set to the model. (Pg. 4557, Fig. 2, Proposed Architecture of Liu shows the second robot applying the environment perception data from Environment 2 to the private model and shared model 2G through reinforcement learning.)

The combination of Liu and Yang does not explicitly teach: compare first accuracy data associated with the updated individual model and second accuracy data associated with the updated central model to determine a preferred model representing the updated individual model or the updated central model; and apply [second data] to the preferred model.

However, Breckenridge teaches: compare first accuracy data associated with the updated individual model and second accuracy data associated with the updated central model to determine a preferred model representing the updated individual model or the updated central model; ([0066] of Breckenridge states "In some implementations, the effectiveness score of the retrained predictive model is compared to the effectiveness score of the trained predictive model from which the retrained predictive model was derived. If the retrained predictive model is more effective, then the retrained predictive model can replace the initially trained predictive model in the predictive model repository 215. If the retrained predictive model is less effective, then it can be discarded." And [0065] of Breckenridge states "That is, an effectiveness score can be generated, for example, in the manner described above. In some implementations, the effective score of a retrained predictive model is determined by tallying the results from the initial cross-validation (i.e., done for the updateable predictive model from which the retrained predictive was generated) and adding in the retrained predictive model's score on each new piece of training data. By way of illustrative example, consider Model A that was trained with a batch of 100 training samples and has an estimated 67% accuracy as determined from cross-validation. Model A then is updated (i.e., retrained) with 10 new training samples, and the retrained Model A gets 5 predictive outputs correct and 5 predictive outputs incorrect. The retrained Model A's accuracy can be calculated as (67+5)/(100+10)=65%.") and apply [second data] to the preferred model. ([0057] of Breckenridge states "That is, after the initial training data had been uploaded by the client computing system and used to train multiple predictive models, at least one of which was then made accessible to the client computing system, additional new training data becomes available." And [0065] of Breckenridge states "Each retrained predictive model that is generated using the new training data from the training data queue")

It would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to combine the teachings of Yang, Liu, and Breckenridge because Breckenridge teaches a method to retrain and update predictive models using new data by evaluating models with effectiveness scores and selecting the better model to use.
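The tallying arithmetic Breckenridge describes in [0065]-[0066] can be sketched in a few lines. The following is a hypothetical illustration only: the function names, and the mapping onto the claimed "individual" and "central" models, are the editor's assumptions, not code from any cited reference.

```python
def tallied_accuracy(base_score: float, base_total: int,
                     new_correct: int, new_total: int) -> float:
    # Pool the initial cross-validation result with the retrained
    # model's score on each new piece of training data, per the tally
    # in Breckenridge [0065]: (67 + 5) / (100 + 10) = 65% for Model A.
    return (base_score + new_correct) / (base_total + new_total)

def preferred_model(individual_acc: float, central_acc: float) -> str:
    # Per Breckenridge [0066]: keep whichever model is more effective;
    # ties favor the existing individual model in this sketch.
    return "central" if central_acc > individual_acc else "individual"

model_a = tallied_accuracy(67, 100, 5, 10)
print(f"{model_a:.0%}")  # prints "65%"
```

Note that Breckenridge's worked example pools a percentage score (67) with raw sample counts; the sketch reproduces that arithmetic verbatim rather than normalizing it.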
One with ordinary skill in the art would be motivated to incorporate the teachings of Breckenridge into those of Yang and Liu in order to ensure that the federated learning system consistently chooses the more accurate and effective model. The combination would improve system performance using known techniques.

Regarding claim 14, the rejection of claim 10 is incorporated herein. Claim 14 recites substantially similar subject matter as claim 5, and is rejected with the same rationale, mutatis mutandis.

Conclusion

Applicant's amendment necessitated the new ground of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BYUNGKWON HAN, whose telephone number is (571) 272-5294. The examiner can normally be reached M-F, 9:00 AM-6:00 PM PST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Li B. Zhen, can be reached at (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BYUNGKWON HAN/
Examiner, Art Unit 2121

/Li B. Zhen/
Supervisory Patent Examiner, Art Unit 2121
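For orientation, the LFRL update loop that the Liu reference describes, in which robots download the shared model, train private models in their own environments, upload them, and the cloud fuses them into the next shared generation, can be sketched as below. This is a hedged illustration only: plain parameter averaging stands in for Liu's actual knowledge-fusion algorithm, the private "training" step is stubbed as a random perturbation, and all names are the editor's.

```python
import numpy as np

def fuse(shared: np.ndarray, privates: list[np.ndarray]) -> np.ndarray:
    # Stand-in fusion step: average the shared model's parameters with
    # the uploaded private models. Liu's LFRL uses a dedicated
    # knowledge-fusion algorithm; averaging only shows the iteration.
    return np.mean([shared, *privates], axis=0)

rng = np.random.default_rng(0)
shared_1g = np.zeros(4)                # shared model 1G on the cloud
# Robots 2 and 3 download 1G, then train privately in Environments 2
# and 3 (training stubbed here as a perturbation of the parameters).
q2 = shared_1g + rng.normal(size=4)
q3 = shared_1g + rng.normal(size=4)
shared_2g = fuse(shared_1g, [q2, q3])  # next-generation shared model 2G
```

Iterating this loop with further robots and environments is what the action characterizes as the shared model "becoming increasingly powerful."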

Prosecution Timeline

Aug 20, 2021: Application Filed
Aug 29, 2025: Non-Final Rejection (§103)
Dec 09, 2025: Response Filed
Mar 17, 2026: Final Rejection (§103, current)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 0%
With Interview: 0% (+0.0%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 1 resolved case by this examiner. Grant probability derived from career allow rate.
