Prosecution Insights
Last updated: April 19, 2026
Application No. 18/667,359

Information Processing Method and Apparatus

Status: Final Rejection (§103)
Filed: May 17, 2024
Examiner: ELL, MATTHEW
Art Unit: 2141
Tech Center: 2100 — Computer Architecture & Software
Assignee: Shenzhen Yinwang Intelligent Technologies Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 66% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 4y 1m
Grant Probability with Interview: 89%

Examiner Intelligence

Career Allow Rate: 66% (252 granted / 380 resolved; +11.3% vs TC avg; above average)
Interview Lift: +22.4% among resolved cases with interview (strong)
Typical Timeline: 4y 1m average prosecution; 12 applications currently pending
Career History: 392 total applications across all art units

Statute-Specific Performance

§101: 14.1% (-25.9% vs TC avg)
§103: 49.3% (+9.3% vs TC avg)
§102: 16.8% (-23.2% vs TC avg)
§112: 14.6% (-25.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 380 resolved cases.
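As a cross-check on the figures above, subtracting each "vs TC avg" delta from the examiner's statute-specific rate should recover the Tech Center baseline. A minimal sketch (the rates and deltas are the reported values; the derived ~40.0% baseline is an inference, not a figure stated on this page):

```python
# Examiner allow rates per statute and their deltas vs the Tech Center
# average, as reported above (in percent).
examiner_rate = {"101": 14.1, "103": 49.3, "102": 16.8, "112": 14.6}
delta_vs_tc = {"101": -25.9, "103": 9.3, "102": -23.2, "112": -25.4}

# Subtracting each delta recovers the implied Tech Center baseline.
implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
print(implied_tc_avg)
```

Every statute implies the same baseline, which suggests the dashboard compares each rate against a single Tech Center-wide estimate rather than per-statute averages.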

Office Action

§103
DETAILED ACTION

This office action is responsive to the amendment in the above-identified application filed November 16, 2025. Claims 1, 2, 4-6, 8-14 and 16-23 are pending, all examined and rejected.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on September 23, 2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 4-6, 8-14, 16-19 and 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over Xu, U.S. Patent #10,671,068, issued June 2, 2020, in view of Kim, U.S. PG Pub #2020/0004257, published January 2, 2020.

With regard to Independent Claim 1: Xu teaches a method comprising obtaining first information comprising environment information of a terminal in which a first apparatus is located. See e.g., Col. 5:29-33 ("Next, the specification describes a control system that may receive sensor data from different sensors and share sensor data across processing pipelines. An example control system, providing autonomous navigation for a vehicle is then described."). See also Col. 7:30-32 ("External sensors may be sensors that can monitor one or more aspects of an external environment relative to vehicle…"). The examiner notes that a "vehicle" is a "terminal" and a "sensor" is a "first apparatus" located in the terminal under BRI consistent with the specification.

Xu further teaches receiving second information indicating at least one of region information or time information of the terminal. See e.g., Col. 7:32-39 (discussing various types of sensors, including GPS devices which provide "region information" of the terminal).

Xu further teaches inputting the first and second information to [a] predefined algorithm to obtain first sensed information for the terminal and outputting the first sensed information. See e.g., Fig. 1 (showing raw sensor data from various sensors – which as explained above can include the claimed "first information" and "second information" – being passed into processing pipelines, which include various algorithms). At step 182, the processed sensor data from one sensor's pipeline can be shared with others. Thus, as the pipeline progresses, the data that is ultimately output is based on both the claimed "first information" and "second information."

Xu does not explicitly disclose "selecting, from among multiple predefined algorithms in a predefined algorithm set, a predefined algorithm based on the second information, wherein each of the multiple predefined algorithms corresponds to a different region or a different time."

Kim teaches selecting, from among multiple predefined algorithms in a predefined algorithm set, a predefined algorithm based on the second information, wherein each of the multiple predefined algorithms corresponds to a different region or a different time. See e.g., [0011] (discussing acquiring location information (meta info) and then receiving an object detection algorithm selected based on the meta info). See also [0021] (algorithms can be selected based on time as well). See also [0332] (specific example of an algorithm selected based on region and time).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having Xu and Kim before them, to modify the fusion system and neural networks of Xu with the ability to select algorithms based on region and time as taught by Kim. One would be motivated to do so in order to provide specialized custom resources to improve object detection.

With regard to Dependent Claim 2: As discussed above, Xu-Kim teaches all the limitations of Claim 1. Xu-Kim further teaches wherein obtaining the first information comprises detecting the environment information of the terminal or receiving the first information. See e.g., Xu, Col. 5:29-33 ("Next, the specification describes a control system that may receive sensor data from different sensors and share sensor data across processing pipelines. An example control system, providing autonomous navigation for a vehicle is then described."). See also Col. 7:30-32 ("External sensors may be sensors that can monitor one or more aspects of an external environment relative to vehicle…"). The examiner notes that even though this claim is written in the alternative, Xu teaches both alternatives as shown above.

With respect to Dependent Claim 4: As discussed above, Xu-Kim teaches all the limitations of Claim 1. Xu-Kim further teaches wherein each of the multiple predefined algorithms is a neural network model. See e.g., Xu, Fig. 4, Col. 9:49-55 (discussing using a neural network in the processing pipelines; thus the information output would be based on a neural network model).

With respect to Dependent Claim 5: As discussed above, Xu-Kim teaches all the limitations of Claim 1. Xu-Kim further teaches wherein outputting the first sensed information comprises sending the first sensed information to a fusion unit. See e.g., Xu, Fig. 1 (first sensed information is sent to "final decision processing 170," which is a fusion unit).

With regard to Independent Claim 6: Claim 6 is similar in scope to Claim 1 and is rejected under a similar rationale.

With respect to Dependent Claim 8: As discussed above, Xu-Kim teaches all the limitations of Claim 6. Xu-Kim further teaches wherein each of the multiple predefined algorithms in the predefined algorithm set further corresponds to different sensors. See Xu, Fig. 1 (showing various sensors with separate processing pipelines); see also Zhang [0088] (discussing that the algorithms can be used on different data sets).

With regard to Dependent Claims 9 and 16: These claims are similar in scope to Claim 4 and are rejected under a similar rationale.

With regard to Dependent Claim 10: As discussed above, Xu-Kim teaches all the limitations of Claim 6. Xu-Kim further teaches wherein the first information is from a sensor. See e.g., Xu, Col. 5:29-33 ("Next, the specification describes a control system that may receive sensor data from different sensors and share sensor data across processing pipelines. An example control system, providing autonomous navigation for a vehicle is then described."). See also Col. 7:30-32 ("External sensors may be sensors that can monitor one or more aspects of an external environment relative to vehicle…").

With regard to Dependent Claim 11: As discussed above, Xu teaches all the limitations of Claim 6. Xu further teaches wherein the region information comprises one or more of … a region. See e.g., Col. 4:25-32 (discussing image data being buffered into regions). See also Col. 8:48-65 (discussing similar). Also note the various citations to GPS and similar navigational systems, which provide position information that naturally is a "region."

With regard to Dependent Claim 12: As discussed above, Xu-Kim teaches all the limitations of Claim 6. Xu-Kim further teaches wherein the first apparatus comprises a camera apparatus, a lidar…. See Xu, Col. 7:26-46 (describing several sensor types including cameras, lidar, …).

With regard to Independent Claim 13: Claim 13 is similar in scope to Claim 1 and is rejected under a similar rationale.

With regard to Dependent Claim 14: As discussed above, Xu-Kim teaches all the limitations of Claim 13. Xu-Kim further teaches wherein the instructions further cause the apparatus to detect the environment information of the terminal. See Xu, Col. 7:30-32 ("External sensors may be sensors that can monitor one or more aspects of an external environment relative to vehicle…"). The examiner notes that a "vehicle" is a "terminal" and a "sensor" is a "first apparatus" located in the terminal under BRI consistent with the specification.

With regard to Dependent Claim 17: Claim 17 is similar in scope to Claim 5 and is rejected under a similar rationale.

With regard to Dependent Claim 18: As discussed above, Xu-Kim teaches all the limitations of Claim 13. Xu-Kim further teaches wherein the instructions further cause the apparatus to receive the first information. See Xu, Col. 7:30-32 ("External sensors may be sensors that can monitor one or more aspects of an external environment relative to vehicle…"). The examiner notes that a "vehicle" is a "terminal" and a "sensor" is a "first apparatus" located in the terminal under BRI consistent with the specification.

With regard to Dependent Claim 19: As discussed above, Xu-Kim teaches all the limitations of Claim 13. Xu-Kim further teaches wherein the second information indicates the region information and the time information. See e.g., Kim, [0011, 0012] (information can include location and time).

With regard to Dependent Claim 21: As discussed above, Xu-Kim teaches all the limitations of Claim 1. Xu-Kim further teaches wherein selecting the predefined algorithm comprises comparing the at least one of region information or time information to the corresponding region or time of each of the multiple predefined algorithms and selecting the predefined algorithm based on a match between the at least one of region information or time information and the corresponding region or time of each of the multiple predefined algorithms. See e.g., [0011] (discussing acquiring location information (meta info) and then receiving an object detection algorithm selected based on the meta info). See also [0021] (algorithms can be selected based on time as well). See also [0332] (specific example of an algorithm selected based on region and time; note specifically the matching, which also requires a comparing step).

With regard to Dependent Claims 22 and 23: These claims are similar in scope to Claim 21 and are rejected under a similar rationale.

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Xu in view of Kim, further in view of Drayna, U.S. PG Publication #2019/0191424, filed December 23, 2019.

With regard to Dependent Claim 20: As discussed above, Xu-Kim teaches all the limitations of Claim 13. Xu-Kim further teaches construct a neural network model … output the sensed information based on the neural network model. See e.g., Xu, Fig. 4, Col. 9:49-55 (discussing using a neural network in the processing pipelines; thus the information output would be based on a neural network model). Xu does not teach constructing the neural network model using the region information and the time information. In an analogous art, Drayna discloses construct a neural network model using the region information and the time information. See e.g., [0020] (input data for a neural network can be based on movement data along a travel route, time of day, season, etc.). See also [0067] (discussing that training data may be generated using GNSS – global navigation satellite system – which, see [0030], includes a GPS). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the fusion system and neural networks of Xu with the ability to be trained ("based on") specifically on region and time information as discussed by Drayna. One would be motivated to do so to better allow determination and recognition of operational surfaces on which the vehicles are operated. See generally Drayna, [0020].

Response to Arguments

In view of the amendments to claim 8, the objection thereto is withdrawn. Applicant's remarks regarding the prior art rejections are moot in view of the new grounds of rejection necessitated by applicant's amendment.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATT ELL, whose telephone number is (571) 270-3264. The examiner can normally be reached 9-5, M-F. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Dave Wiley, can be reached at 571-272-4150. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MATTHEW ELL/
Supervisory Patent Examiner, Art Unit 2141

Prosecution Timeline

May 17, 2024
Application Filed
Jun 12, 2024
Response after Non-Final Action
Aug 26, 2025
Non-Final Rejection — §103
Nov 26, 2025
Response Filed
Jan 09, 2026
Final Rejection — §103
Apr 10, 2026
Response after Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596914: GENERATIVE ADVERSARIAL NEURAL ARCHITECTURE SEARCH (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596926: SYSTEMS AND METHODS FOR ADJUSTING DATA PROCESSING COMPONENTS FOR NON-OPERATIONAL TARGETS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586002: MULTI-POLYTOPE MACHINE FOR CLASSIFICATION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585815: INFORMATION MANAGEMENT SYSTEM AND METHOD FOR COMMUNICATION APPLICATION, AND DISPLAY TERMINAL (granted Mar 24, 2026; 2y 5m to grant)
Patent 12554221: Holographic Calling for Artificial Reality (granted Feb 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 66%
With Interview: 89% (+22.4%)
Median Time to Grant: 4y 1m
PTA Risk: Moderate

Based on 380 resolved cases by this examiner. Grant probability is derived from the career allow rate.
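The headline numbers above can be reproduced from the raw career counts. A minimal sketch, assuming the with-interview figure is simply the career allow rate plus the +22.4-point lift (the page does not state exactly how the tool combines them):

```python
granted, resolved = 252, 380   # career counts reported above
interview_lift_pts = 22.4      # percentage-point lift with interview

# 252 / 380 ≈ 66.3%, which the dashboard rounds to 66%.
grant_pct = round(100 * granted / resolved)

# Adding the lift gives ≈ 88.7%, rounding to the reported 89%.
with_interview_pct = round(100 * granted / resolved + interview_lift_pts)

print(grant_pct, with_interview_pct)
```

Note that a flat additive lift is the simplest model consistent with the displayed figures; the tool may instead compute the interview rate directly from the subset of resolved cases that had an interview.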
