Prosecution Insights
Last updated: April 19, 2026
Application No. 18/317,850

SYSTEMS AND METHODS FOR USING A PARTITIONED DEEP NEURAL NETWORK WITH A CONSTRAINED DATA CAP

Non-Final OA: §102, §112
Filed
May 15, 2023
Examiner
KARWAN, SIHAR A
Art Unit
3658
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Steering Solutions IP Holding Corporation
OA Round
3 (Non-Final)
Grant Probability: 56% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 56% (215 granted of 385 resolved cases; +3.8% vs TC avg)
Interview Lift: +25.8% (strong), among resolved cases with interview
Typical Timeline: 3y 3m average prosecution; 41 applications currently pending
Career History: 426 total applications across all art units
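The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic (the variable names are illustrative, not from the dashboard):

```python
# Career allow rate and interview-adjusted grant probability,
# computed from the counts shown above.
granted = 215
resolved = 385

allow_rate = granted / resolved                  # career allow rate
print(f"Career allow rate: {allow_rate:.1%}")    # → 55.8%, shown as 56%

# The dashboard's interview-adjusted figure adds the observed lift
# for cases that included an examiner interview.
interview_lift = 0.258                           # +25.8% lift shown above
print(f"With interview: {allow_rate + interview_lift:.0%}")  # → 82%
```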

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§103: 27.8% (-12.2% vs TC avg)
§102: 33.4% (-6.6% vs TC avg)
§112: 16.4% (-23.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 385 resolved cases.
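Each "vs TC avg" delta is just the examiner's per-statute rate minus the Tech Center average estimate, so the estimate can be recovered by subtraction. A quick sanity check using the numbers above:

```python
# Examiner's per-statute rates (%) and their deltas vs the Tech Center average.
rates = {"101": 11.2, "103": 27.8, "102": 33.4, "112": 16.4}
deltas = {"101": -28.8, "103": -12.2, "102": -6.6, "112": -23.6}

# TC average estimate = examiner rate - delta
for statute in rates:
    tc_avg = rates[statute] - deltas[statute]
    print(f"§{statute}: examiner {rates[statute]}% vs TC avg {tc_avg:.1f}%")
    # each statute implies the same ~40.0% Tech Center estimate
```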

Office Action

Grounds: §102, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

DETAILED ACTION

Claims 1-20 are pending. Claims 1-20 are rejected. Amendments to the claims have been recorded.

Applicant's Arguments

Applicant's arguments are fully addressed by the new rejections made to the newly provided amendments. In practicing compact prosecution, the Examiner has conducted a new search. It is noted that the claim limitation "diagnostic information" is simply information, as modern cars all have diagnostic ports and buses that transmit and receive information (e.g., OBD2). However, in an attempt at compact prosecution, the Examiner recommends amending the claims to further define "diagnostic information". The new search has found the art of Han US 2015/0120145, para. 64, which states: "The steering system unit 10 is disposed at four wheels of a vehicle, controls steering of the wheels and collects and transmits the state information of the wheels. Further, when any one of steering systems in the steering system unit 10 breaks, the steering system transmits fault information to the control unit 20 and the steering systems without a fault receive corrected steering angles calculated by the control unit 20 and control the steering angle of their wheels."

Claim Rejections - 35 USC § 112(a)

The following is a quotation of the first paragraph of 35 U.S.C. 112(a): "(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention." The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: "The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention."

Claim 20 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s)), at the time the application was filed, had possession of the claimed invention. No support for Applicant's amendment "selectively use at least one of vector quantization and entropy coding" was readily found in the specification. Para. 40 of Applicant's specification recites "selectively control a motor of the steering system using torque…", not "selectively use at least one of vector quantization and entropy coding". "Selectively" was an example given by the Examiner; as such, the 112(b) rejection is not overcome.

Claim Rejections - 35 USC § 112(b)

The following is a quotation of 35 U.S.C. 112(b): "(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention." The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: "The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention."

Claims are rejected as failing to define the invention in the manner required by 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. The claims are narrative in form and replete with indefinite language. The structure which goes to make up the device must be clearly and positively specified, and must be organized and correlated in such a manner as to present a complete operative device. The claims must be in one-sentence form only; note the format of the claims in the patent(s) cited. The claims are generally narrative and indefinite, failing to conform with current U.S. practice. They appear to be a literal translation into English from a foreign document and are replete with grammatical and idiomatic errors. Claim 20 recites "use vector quantization and entropy coding to further compress, using the first machine learning model". The claim is unclear as to which limitation is being used, or, if both are used, how the limitations are being used: simultaneously, by feedback, sequentially, or selectively?

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. Under 35 U.S.C. 102(a)(1): "the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention."

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Huang US 2021015771.

Claim 1: "A method using a partitioned deep neural network with a constrained data cap, the method comprising: receiving, at a first machine learning model, raw data from a controller of a steering system." Huang para. 21: receive sensor data [sensor data is raw data]. Also para. 46: "The encoding system can use the resulting estimated distribution to losslessly compress the raw sequential stream of symbols representing the nodes of the tree-based structure using an entropy coding algorithm such as arithmetic coding." Also para. 27: "can generate specific control signals for the autonomous vehicle (e.g., adjust steering, braking, velocity, and so on)."

"generating, using a first encoder of the first machine learning model, compressed code using the raw data." Para. 5: using the contextual information [raw sensor data] as input to a machine-learned model. Para. 18: "the present disclosure is directed to systems and methods for generating a compressed and encoded representation of point cloud (e.g., LIDAR) data." Also para. 21: receive sensor data about the environment, perceive objects [perceived objects are models and not the objects themselves] within the vehicle's surrounding environment (e.g., other vehicles).

"identifying, using the first machine learning model, values, of a plurality of values of the compressed code, that are outside of a value range." Para. 127: per Shannon's source coding theorem, the cross-entropy between q(x) and p(x) provides a tight lower bound on the bitrate achievable by arithmetic or range coding algorithms, such that the better q(x) approximates p(x), the lower the true bitrate. The entropy model can be trained to minimize the cross-entropy loss between the model's predicted distribution q and the distribution of the training data. It is noted that anything outside the theorem is outside the range.

"generating, using a first head of the first machine learning model, a prediction value for each identified value, the prediction value for each respective value of the identified values predicting whether a respective value indicates an anomaly in the raw data." Para. 126: for eight-bit symbols (e.g., representing a node) in a plurality of eight-bit symbols, the entropy model can generate [at a first head node, i.e., 404] an estimated distribution q(x_n) for a particular symbol x_n that minimizes the difference (or cross-entropy) [anomaly] with the actual distribution of symbols for the particular symbol, p(x_n).

"further compressing, using the first machine learning model using vector quantization, portions of the compressed code associated with prediction values that are greater than a threshold." Para. 129: where x_an(i) = {x_pa(i), x_pa(pa(i)), …, x_pa(…(pa(i)))} with |x_an(i)| ≤ K can be the set of ancestor nodes of a given node i, up to a given order K [any value outside of x_an(i) is greater than a threshold K]. Also para. 56: generating statistical data based on the feature data associated with nodes higher in the tree hierarchy. Para. 62: fed through a linear layer [quantized] and softmax to output intensity probability values; in some examples, the intensity value is an 8-bit integer, so the resulting probability vector [quantized by layers] is 256-dimensional, p(r_i^(t) | X^(t), P^(t−1); w).

"wherein the portions of the compressed code are compressed from a continuous vector space to a discrete space." Para. 63: "The data encoding system can then encode the byte streams using the entropy model discussed above to produce a compressed bitstream." Para. 66: "The data encoding system can divide the three-dimensional space [continuous vector space] into eight sub-areas [discrete space]."

"communicating the portions of the compressed code to a second machine learning model." Para. 67: "the data encoding system [parent] can add a node representing the respective sub-area as a child node [second machine learning model] of the initial node in the tree-based data structure."

"receiving, from a second head of the second machine learning model, diagnostics information responsive to the portions of the compressed code, where the first head [404; child consumes 404 and 408] and first machine learning model consume less space and compute cycles than the second head and the second machine learning model." Also para. 124: contextual information has been calculated by the encoding system [compressing code] [402; parent consumes 402, 406, 404, 408]. See also MPEP 2144.04 (design choice).

"controlling the steering system, using the controller of the steering system, based on the diagnostic information." Para. 27: "can generate specific control signals for the autonomous vehicle (e.g., adjust steering, braking, velocity, and so on)."

Claim 2 ("in response to receiving the diagnostics information, initiating at least one corrective action procedure"): para. 62, the data encoding system can leverage temporal correlations.

Claim 3 ("the diagnostics information includes at least one of issue classification information, severity information, and monitoring parameter information"): para. 205, the training technique is backwards propagation of errors [issue classification information as error].

Claim 4 ("the first machine learning model is disposed within a vehicle"): Fig. 1, #102.

Claim 5 ("the second machine learning model is disposed on a remote computing device"): Fig. 1, #106.

Claim 6 ("the remote computing device is associated with a cloud computing infrastructure"): associated with Fig. 1, #108.

Claim 8 ("the steering system includes an electronic power steering system"): intended use of electric power steering.

Claim 9 ("further compressing ... further includes using vector quantization"): paras. 154 and 167, "the data encoding system 606 can quantize and encode the point cloud data into an octree representation (e.g., via octree generator 610) where leaves represent the quantized points and intermediate nodes contain 8-bit symbols representing child occupancies. This feature can be fed through a linear layer and softmax to output intensity probability values. In some examples, the intensity value is an 8-bit integer, so the resulting probability vector is 256-dimensional, p(r_i^(t) | X^(t), P^(t−1); w)."

Claim 10 ("further includes using entropy coding"): para. 46, using an entropy coding algorithm.

Claims 11-16 are rejected using the same rejections as made to claims 1-6. Claim 18 is rejected using the same rejection as made to claim 10; intended use of the well-known method of vector quantization. Claim 19 is rejected using the same rejection as made to claim 9. Claim 20 is rejected using the same rejection as made to claim 1; intended use of the well-known methods of vector quantization and entropy coding. Claim 20 is broader in scope than claim 1 based on Markush grouping.

Citation of Pertinent Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Han US 2015/0120145, para. 64: "The steering system unit 10 is disposed at four wheels of a vehicle, controls steering of the wheels and collects and transmits the state information of the wheels. Further, when any one of steering systems in the steering system unit 10 breaks, the steering system transmits fault information to the control unit 20 and the steering systems without a fault receive corrected steering angles calculated by the control unit 20 and control the steering angle of their wheels."

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SIHAR A KARWAN, whose telephone number is (571) 272-2747. The examiner can normally be reached M-F, 11am-7pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ramon Mercado, can be reached at 571-270-5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SIHAR A KARWAN/
Examiner, Art Unit 3664
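The disputed claim language combines vector quantization with entropy coding, and the cited Huang reference leans on the Shannon cross-entropy bound on achievable bitrate. A minimal, generic sketch of both ideas (illustrative only; the codebook and sample values are invented, not drawn from the application or from Huang):

```python
import math
from collections import Counter

# Vector quantization: map each continuous value to its nearest
# codebook entry, moving from a continuous space to a discrete one.
codebook = [-1.0, 0.0, 1.0]
samples = [0.9, -0.2, 1.3, 0.1, -0.8, 0.05]
quantized = [min(codebook, key=lambda c: abs(x - c)) for x in samples]
print(quantized)  # → [1.0, 0.0, 1.0, 0.0, -1.0, 0.0]

# Entropy coding bound (Shannon): with a model distribution q over the
# symbols, expected bits per symbol is the cross-entropy
#   H(p, q) = -sum_x p(x) * log2(q(x)),
# minimized (equal to the entropy H(p)) when q matches p. Here we use
# the empirical distribution of the quantized symbols as p = q.
counts = Counter(quantized)
n = len(quantized)
p = {sym: c / n for sym, c in counts.items()}
entropy = -sum(px * math.log2(px) for px in p.values())
print(f"empirical entropy: {entropy:.3f} bits/symbol")  # → 1.459
```

The discreteness of the codebook is what makes the symbol stream countable, which in turn is what lets an arithmetic or range coder approach the entropy bound.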

Prosecution Timeline

May 15, 2023
Application Filed
Jul 07, 2025
Non-Final Rejection — §102, §112
Sep 08, 2025
Applicant Interview (Telephonic)
Sep 08, 2025
Examiner Interview Summary
Oct 09, 2025
Response Filed
Oct 27, 2025
Final Rejection — §102, §112
Dec 29, 2025
Response after Non-Final Action
Jan 29, 2026
Request for Continued Examination
Feb 22, 2026
Response after Non-Final Action
Mar 06, 2026
Non-Final Rejection — §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589502: CARGO-HANDLING APPARATUS, CONTROL DEVICE, CONTROL METHOD, AND STORAGE MEDIUM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12589750: VEHICULAR CONTROL SYSTEM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12589504: SYSTEM AND METHOD FOR COGNITIVE SURVEILLANCE ROBOT FOR SECURING INDOOR SPACES (granted Mar 31, 2026; 2y 5m to grant)
Patent 12583100: ROBOT TO WHICH DIRECT TEACHING IS APPLIED (granted Mar 24, 2026; 2y 5m to grant)
Patent 12576516: HUMAN SKILL BASED PATH GENERATION (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 56%
With Interview: 82% (+25.8%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 385 resolved cases by this examiner. Grant probability derived from career allow rate.
