Prosecution Insights
Last updated: April 19, 2026
Application No. 18/131,941

TURNING DIRECTION PREDICTION SYSTEM, MOVING SYSTEM, TURNING DIRECTION PREDICTION METHOD, AND PROGRAM

Final Rejection — §101, §102, §112
Filed
Apr 07, 2023
Examiner
ORANGE, DAVID BENJAMIN
Art Unit
2663
Tech Center
2600 — Communications
Assignee
Toyota Jidosha Kabushiki Kaisha
OA Round
4 (Final)
34%
Grant Probability
At Risk
5-6
OA Rounds
3y 7m
To Grant
63%
With Interview

Examiner Intelligence

Grants only 34% of cases
34%
Career Allow Rate
51 granted / 151 resolved
-28.2% vs TC avg
Strong +29% interview lift
Without
With
+29.4%
Interview Lift
resolved cases with interview
Typical timeline
3y 7m
Avg Prosecution
51 currently pending
Career history
202
Total Applications
across all art units
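The headline figures in this panel are simple ratios over the examiner's case counts. A minimal sketch, assuming the tool divides grants by resolved cases and that the Tech Center baseline is implied by the displayed delta (both are assumptions about this dashboard's methodology, not documented formulas):

```python
# Sanity-check the Examiner Intelligence figures from the raw counts shown.
# The TC-average baseline (0.62) is implied by the displayed -28.2% delta,
# not stated directly anywhere on the page.

granted = 51        # cases granted by this examiner (career)
resolved = 151      # granted + abandoned; excludes cases still pending
total_apps = 202    # all applications across art units

allow_rate = granted / resolved       # career allow rate
pending = total_apps - resolved       # currently pending

tc_avg_allow = 0.62                   # implied Tech Center average
delta_vs_tc = allow_rate - tc_avg_allow

print(f"Career allow rate: {allow_rate:.1%}")   # ≈ 33.8%, displayed as 34%
print(f"Currently pending: {pending}")          # 51
print(f"vs TC average: {delta_vs_tc:+.1%}")     # ≈ -28.2%
```

The counts are internally consistent: 51/151 rounds to the displayed 34%, and 202 total minus 151 resolved matches the 51 pending shown above.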

Statute-Specific Performance

§101
13.1%
-26.9% vs TC avg
§103
29.0%
-11.0% vs TC avg
§102
20.2%
-19.8% vs TC avg
§112
32.0%
-8.0% vs TC avg
Black line = Tech Center average estimate • Based on career data from 151 resolved cases
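Each per-statute delta above is measured against the Tech Center baseline (the black line). A quick sketch, assuming the delta is a simple difference, recovers that baseline from the displayed pairs:

```python
# Recover the implied Tech Center baseline from each (allow rate, delta)
# pair shown above, assuming: delta = examiner rate - TC average.
# The additive relationship is an assumption inferred from the display.

stats = {               # statute: (examiner allowance rate, delta vs TC avg)
    "§101": (0.131, -0.269),
    "§103": (0.290, -0.110),
    "§102": (0.202, -0.198),
    "§112": (0.320, -0.080),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta           # implied Tech Center average
    print(f"{statute}: implied TC avg = {tc_avg:.1%}")
```

All four pairs imply the same 40.0% baseline, consistent with a single Tech Center average estimate rather than per-statute baselines.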

Office Action

§101 §102 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments and amendment have persuasively overcome the claim objections and the “various detections” aspect of one of the 112(a) rejections. The remaining issues are addressed below.

Interview summary

Applicant argues: A tentative agreement was reached regarding claim amendments, as formally presented herein, to overcome the objection to the claims, and the rejections under 35 U.S.C. §112(b), 35 U.S.C. §102(a)(1), and 35 U.S.C. §102(a)(2).

Examiner responds: As shown in the interview summary, the examiner did not agree to all of this.

101

Applicant argues: Contrary to the Office Action's position that the features of independent claims 1, 10, and 11 recite features that cover mental processes (Office Action at pp. 7-8), independent claim 1 (and similarly independent claims 10 and 11) now affirmatively recites the portable system comprising a processor configured to: …

Examiner responds: As per the rejection, use of a generic computer is still a mental process.

Applicant argues: The combination of these additional claim elements, when viewed in its entirety with the rest of the claimed elements, do not constitute well-understood, routine, or conventional activity in the field.

Examiner responds: These technologies were well-understood, routine, conventional activity in the field. See, e.g., the Wikipedia page titled “Pedestrian crash avoidance mitigation,” March 29, 2022, retrieved from https://en.wikipedia.org/w/index.php?title=Pedestrian_crash_avoidance_mitigation&oldid=1079880280 (attached).

Applicant’s arguments over the prior art are rebutted by the interview summary.
Further, the rejection of claim 13 in the December 5, 2025 nonfinal cites to Pindeus [0041] and states “Pindeus’ deep learning teaches the claimed neural network.”

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-8, 10-13, 16, and 17 (all claims) are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 1, 10 and 11 recite various predictions. Each of these detections and predictions is an example of claiming a result without reciting the particular structure, materials or steps that accomplish the function or achieve the result. MPEP 2173.05(g).
The use of a 3D camera overcame the rejection for the various detections. Claims 1, 10 and 11 recite “infer,” but this is also unlimited functional claiming. MPEP 2173.05(g). Relatedly, the claims recite responding to various changes in a person’s “center of gravity,” but the specification does not teach how a center of gravity is determined. In particular, the specification only teaches a “3D camera” (understood as lidar), but this does not contain any information about the weight of the photographed object. Further, the relevant disclosure appears limited to the sentence “For example, when a person will turn, pushing-out of the center of gravity, control of the direction of the center of gravity, control of the acceleration of the center of gravity, control of the balance of the trunk, and determination of the turning strategy are performed in this order” from specification, p. 7.

Claims 1, 10 and 11 recite “machine learning apparatus including a neural network,” but this is unlimited functional claiming. MPEP 2173.05(g). Specifying a particular architecture (such as convolutional) is expected to overcome this rejection; however, the examiner’s quick review of the specification has not identified any such architectural detail.

Dependent claims are likewise rejected.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-8, 10-13, 16, and 17 (all claims) are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 recites a “portable” system, but recites that the portable system comprises an autonomous vehicle. This is indefinite because the plain meaning of “portable” does not include objects as large as a vehicle. MPEP 2173.05(a).

Claim 1 recites “an autonomous vehicle comprising: a control unit; and a processor,” but it is not clear what the control unit is. “Control unit” avoids means-plus-function treatment because it is recited as part of an autonomous vehicle, and this provides sufficient structure. The control unit is not the processor because that is separately recited. Specification, p. 15 states “a control unit 8 that controls the movement of the vehicle 50.” However, this is only describing what the control unit does, not what it is. Specification p. 16 states “The control unit 8 is a specific example of a control means,” but the phrase “control means” is also undefined. Page 16 also states “Further, at least one of the warning unit 7 and the control unit 8 may be provided outside the vehicle 50.” The examiner is not clear on what would control the movement of a vehicle from outside the vehicle (e.g., steering control is generally inside a vehicle).

Claims 1, 10, and 11 recite “a three-dimensional camera,” but this is new terminology. MPEP 2173.05(a). One option may be to submit a revised translation of just this term from the Japanese parent. For the purposes of examination, this is interpreted as LIDAR because LIDAR generates three-dimensional images.

Claims 1, 10, and 11 recite “a sequence including at least in the following order: …,” but the recited motions are not sequential; rather, they are all the same motion. For example, Figs. 2 and 3 show a person walking.
Moving one leg forward is an example of:

- pushing out of a center of gravity (because the leg has mass, when it moves, the center of gravity changes);
- controlling a direction of the center of gravity;
- controlling an acceleration of the center of gravity; and
- controlling a balance of a trunk of the person (because the leg has mass, when it moves, the balance of the person changes; here, because the trunk is supported by the legs, moving a leg changes the balance of the trunk of the person).

Therefore, it is unclear how to interpret the claimed sequence in order when only one action occurs.

Claims 1, 10, and 11 recite “a preliminary motion,” but “preliminary” is a relative term that lacks sufficient guidance. MPEP 2173.05(b). In particular, the claim does not specify what the motion is preliminary to.

Claims 1, 10, and 11 recite “after performing a preliminary motion including the controlling the balance of the trunk of the person,” but this describes actions taken by the person, not the computer. Comparing Figs. 1 and 2, the computer detects that a person has made these movements, but the computer does not perform or control them. There is no support in the specification for interpreting method claim 10 as referring to the person as part of the method.

Claims 1, 10, and 11 recite “corresponding to,” but this is subjective because different people can have different opinions as to when this is met. MPEP 2173.05(b)(IV). Specifying an objective standard, such as “is,” overcomes this rejection.

Claims 1, 10, and 11 recite “like a top,” but this is subjective because different people can have different opinions as to when this is met. MPEP 2173.05(b)(IV).

Dependent claims are likewise rejected.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-8, 10-13, 16, and 17 (all claims) are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental process) without significantly more.

Step 1: Claim 1 (and its dependents) recites a system comprising a processor, i.e., a machine, which is eligible subject matter. Claim 10 recites a method, i.e., a process, which is eligible subject matter. Claim 11 recites a non-transitory computer readable medium, i.e., an article of manufacture, which is eligible subject matter.

Step 2A, prong one: All of the elements of the claims are a mental process because a driver can look at a pedestrian and decide if the pedestrian looks like they will enter the street. Further, the various models are also mental processes; see Example 47, claim 2, element (d) (from the July 2024 AI subject matter eligibility examples). MPEP 2106.04(a)(2)(III)(C) explains that use of a generic computer or in a computer environment is still a mental process. In particular, this section begins by citing Gottschalk v. Benson, 409 U.S. 63 (1972): “The Supreme Court recognized this in Benson, determining that a mathematical algorithm for converting binary coded decimal to pure binary within a computer’s shift register was an abstract idea.” In Benson the Supreme Court did not separately analyze the computer hardware at issue; the specifics of what hardware was claimed are only included in an appendix to the decision.

Because there are no additional elements, no further analysis is required for Step 2A, prong two or Step 2B.
Examiner Notes

While not applied in the present rejection, the examiner believes that it would be appropriate to take official notice that various body poses predict certain movements, such as a person turning their chest toward a street is more likely to cross the street. Pindeus discloses determining intent generally (e.g., Pindeus abstract). This is understood as disclosing the various combinations of body movements leading to various predictions (e.g., stepping towards a curb predicts crossing the street).

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-8, 10-13, 16, and 17 (all claims) are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by U.S. Pat. Pub. 20190176820 (“Pindeus”).

1. A portable system comprising an autonomous vehicle comprising: (Pindeus, [0020] “While vehicle 110 is depicted as a car, vehicle 110 may be any motorized device that is configured to autonomously, or semi-autonomously, navigate near humans.”)

a processor configured to detect whether each of left and right legs of a person is in a swing state or a stance state as acquired by a three-dimensional camera; (Pindeus, Fig. 8, data flow 830. Note that the system detects “moving left leg.” [0018] specifies “LIDAR,” which teaches the claimed three-dimensional camera. See also Fig. 4.)

detect information about a rotation of a chest of the person around a pitch axis, a yaw axis, and a roll axis as acquired by the three-dimensional camera; and (Pindeus, Fig. 9. See also Fig. 4. Pindeus’s motion vector discloses the claimed axes.)

predict a direction in which the person will turn based on the states of the left and right legs detected by the processor and the information about the rotation of the chest detected by the processor; and (Pindeus, [0043] “However, in image 920, human 114 is turning toward the curb, and in image 930, human 114 is facing the curb. Based on these activities, image determination module 332 may determine that the intent of human 114 must be determined to ensure that if the intent of human 114 is to enter the road, vehicle 110 is commanded to avoid creating a hazard for human 114.”)

infer a next movement of the person based on their posture and including the following in this order: when the person will turn: (Pindeus, Fig. 8, “Predicted Intent ‘Crossing street’”)

pushing out of a center of gravity, (Pindeus, Fig. 8, “Moving left leg”; moving the left leg pushes the center of gravity)

controlling a direction of the center of gravity, (Pindeus, Fig. 8, “Moving left leg”; moving the left leg controls the center of gravity forward)

controlling an acceleration of the center of gravity, and (Pindeus, Fig. 8, “Moving left leg”; moving the left leg accelerates the center of gravity forward)

controlling a balance of a trunk of the person; (Pindeus, Fig. 8, “Moving left leg”; note that the person is shown as standing as opposed to falling down, and not falling teaches the claimed controlling a balance. Technically, the BRI of controlling includes falling as well because the balance is intentionally changed.)

determine, after performing a preliminary motion including the controlling the balance of the trunk of the person, a turning strategy being a spin turn or a step turn, the spin turn corresponding to a first turning motion in which the person rotates their body like a top around a pivoting foot and swings out a swing leg in a traveling direction, the step turn corresponding to a second turning motion in which the person brings down the swing leg to the ground while keeping the pivoting foot on the ground and then kicks out the swing leg from the ground and swings out the pivoting foot in the traveling direction; and (Pindeus, Fig. 8, “Crossing street.” See also Fig. 9.)

determine whether the person and the autonomous vehicle are expected to collide without evasive action, (Pindeus, [0043] “Based on these activities, image determination module 332 may determine that the intent of human 114 must be determined to ensure that if the intent of human 114 is to enter the road, vehicle 110 is commanded to avoid creating a hazard for human 114.”)

wherein the control unit of the autonomous vehicle being configured to control the autonomous vehicle to evade the person by performing deceleration control and steering control of the autonomous vehicle in response to the processor determining that the person and the autonomous vehicle are expected to collide without evasive action based on the predicted direction in which the person will turn and based on the turning strategy being the spin turn or the step turn, and (Pindeus, claim 4, “4. The method of claim 2, wherein the safe operation mode, when entered, includes commanding the vehicle to perform at least one of swerving, accelerating, ceasing movement, … .”)

wherein the processor is configured to learn images of the swing and stance states of the left and right legs of the person by using a machine learning apparatus including a neural network, and detect the swing and stance states by using a result of the learning. (Pindeus, [0018] “For example, intent determination module 332 may use deep learning to translate a pose (and potentially additional information, such as a distance from human 114 to the road) to an intent of human 114.”)

2. The portable system according to Claim 1, wherein the processor is configured to predict the person will turn to a swing-leg direction when it has determined that a spinal column of the person is in an extended state and the chest has rotated and has flexed in a direction opposite to a traveling direction based on the information about the rotation of the chest around the pitch axis, the yaw axis, and the roll axis detected by the processor, and (Pindeus, Fig. 4.)

the processor is configured to predict the direction in which the person will turn based on a result of the prediction that the person will turn to the swing-leg direction and the states of the left and right legs of the person detected by the processor. (Pindeus, Fig. 8, data flow 830)

3. The portable system according to Claim 1, wherein the processor is configured to predict the person will turn to a stance-leg direction when it has determined that a spinal column of the person is in a flexed state and the chest has rotated and has flexed in the same direction as the traveling direction based on the information about the rotation of the chest around the pitch axis, the yaw axis, and the roll axis detected by the processor, and (Pindeus, Fig. 4.)

the processor is configured to predict the direction in which the person will turn based on a result of the prediction that the person will turn to the stance-leg direction and the states of the left and right legs of the person detected by the processor. (Pindeus, Fig. 8, data flow 840)

4. The portable system according to Claim 1, wherein the processor is configured to detect a direction of a rotation of a neck of the person around the yaw axis, (Pindeus, [0033] “For example, human 114 is depicted in FIG. 5 in various rotated positions, such as positions 510, 520, 530, 540, and 550. In image 540, as compared to image 510, shoulder and neck keypoints are close to one another.” Note Pindeus’ “neck” keypoint.)

wherein the processor is configured to predict a tentative direction in which the person will turn based on the states of the left and right legs detected by the processor and the information about the rotation of the chest detected by the processor, and the processor is configured to predict the predicted tentative direction as the direction in which the person will turn when it has determined that the predicted tentative direction coincides with the direction of the rotation of the neck around the yaw axis detected by the processor. (Pindeus, Fig. 8, data flow 820)

5. The portable system according to Claim 4, wherein the processor includes an processor, the processor is configured to detect an eye direction of the person, (Pindeus, [0036] “which respectively map to the human being distracted due to looking to the side,”)

wherein the processor is configured to predict a tentative direction in which the person will turn based on the states of the left and right legs detected by the processor and the information about the rotation of the chest detected by the processor, and the processor is configured to predict the predicted tentative direction as the direction in which the person will turn when it has determined that the predicted tentative direction coincides with the eye direction detected by the processor. (Pindeus, Fig. 8, data flow 820)

6. The portable system according to Claim 5, wherein the processor is configured to predict the direction in which the person will turn based on the states of the left and right legs detected by the processor, the information about the rotation of the chest detected by the processor, the direction of the rotation of the neck detected by the processor, and the eye direction detected by the processor. (Pindeus, Fig. 8, data flow 830)

7. The portable system according to Claim 6, wherein the processor is configured to predict a tentative direction in which the person will turn based on the states of the left and right legs detected by the processor and the information about the rotation of the chest detected by the processor, and (Pindeus, Fig. 8)

the processor is configured to predict the tentative direction as the direction in which the person will turn when it has determined that all the eye direction, the direction of the rotation of the neck around the yaw axis, and the direction of the rotation of the chest around the yaw axis have successively pointed to the same direction in this order. (Pindeus, Fig. 9, image 930)

8. The portable system according to Claim 1, wherein the autonomous vehicle is configured to warn the person based on the direction in which the person will turn predicted by the processor. (Pindeus, Fig. 2, signal generation device 218)

Claims 10 and 11 are rejected for the same rationale as claim 1. Additionally, Pindeus, Fig. 2 and claim 10 disclose the claimed computer readable medium of claim 11.

12. The portable system according to claim 1, wherein the stance state corresponds to a state in which the person, during their walking, stands on the ground on their sole of their foot to support their body, and the swing state corresponds to a state in which the person, during their walking, lifts their foot and swings the lifted foot forward or backward. (Pindeus, Fig. 4)

13. The portable system according to claim 1, wherein the processor is configured to learn images of the swing state, the stance state, and the chest using the neural network. (Pindeus, [0041] “For example, intent determination module 332 may use deep learning to translate a pose.” Pindeus’ deep learning teaches the claimed neural network.)

16. The portable system according to claim 1, wherein the processor is configured to generate a skeletal model of the person based on an acquired image, and detect information about the rotation of the chest of the person around the pitch axis, the yaw axis, and the roll axis based on the generated skeletal model of the person. (Pindeus, Fig. 9. See also Fig. 4. Pindeus’s motion vector discloses the claimed axes. Pindeus’ edges and nodes on the human form teach the claimed skeletal model.)

17. The portable system according to claim 1, wherein the information about the rotation of the chest of the person includes a direction of the rotation, an angle of the rotation, and an amount of the rotation. (Pindeus, Fig. 9. See also Fig. 4.)

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US9604639B2 – titled “Pedestrian-intent-detection for automated vehicles”
US11126179B2 – titled “Motion prediction based on appearance”

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID ORANGE whose telephone number is (571)270-1799. The examiner can normally be reached Mon-Fri, 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse, can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DAVID ORANGE/
Primary Examiner, Art Unit 2663

Prosecution Timeline

Apr 07, 2023
Application Filed
May 29, 2025
Non-Final Rejection — §101, §102, §112
Jul 18, 2025
Applicant Interview (Telephonic)
Jul 18, 2025
Examiner Interview Summary
Jul 29, 2025
Response Filed
Aug 18, 2025
Final Rejection — §101, §102, §112
Oct 17, 2025
Request for Continued Examination
Oct 24, 2025
Response after Non-Final Action
Dec 03, 2025
Non-Final Rejection — §101, §102, §112
Feb 05, 2026
Applicant Interview (Telephonic)
Feb 07, 2026
Examiner Interview Summary
Feb 10, 2026
Response Filed
Mar 07, 2026
Final Rejection — §101, §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12567126
INFRASTRUCTURE-SUPPORTED PERCEPTION SYSTEM FOR CONNECTED VEHICLE APPLICATIONS
2y 5m to grant Granted Mar 03, 2026
Patent 11300964
METHOD AND SYSTEM FOR UPDATING OCCUPANCY MAP FOR A ROBOTIC SYSTEM
2y 5m to grant Granted Apr 12, 2022
Patent 10816794
METHOD FOR DESIGNING ILLUMINATION SYSTEM WITH FREEFORM SURFACE
2y 5m to grant Granted Oct 27, 2020
Patent 10433126
METHOD AND APPARATUS FOR SUPPORTING PUBLIC TRANSPORTATION BY USING V2X SERVICES IN A WIRELESS ACCESS SYSTEM
2y 5m to grant Granted Oct 01, 2019
Patent 10285010
ADAPTIVE TRIGGERING OF RTT RANGING FOR ENHANCED POSITION ACCURACY
2y 5m to grant Granted May 07, 2019
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
34%
Grant Probability
63%
With Interview (+29.4%)
3y 7m
Median Time to Grant
High
PTA Risk
Based on 151 resolved cases by this examiner. Grant probability derived from career allow rate.
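The projection arithmetic above can be reproduced directly. This sketch assumes, as the footnote suggests, that the base probability is the career allow rate and that the interview figure simply adds the observed lift; the additive model is inferred from the displayed numbers, not a documented methodology:

```python
# Reproduce the projection figures from the examiner's career stats.
# The additive interview model is an assumption inferred from the display.

granted, resolved = 51, 151

base_grant_prob = granted / resolved   # career allow rate, shown as 34%
interview_lift = 0.294                 # +29.4% observed lift with interview

with_interview = base_grant_prob + interview_lift

print(f"Grant probability: {base_grant_prob:.0%}")  # 34%
print(f"With interview:    {with_interview:.0%}")   # 63%
```

Adding the 29.4% lift to the 33.8% base rate lands on 63.2%, matching the 63% shown for the with-interview projection.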
