Prosecution Insights
Last updated: April 19, 2026
Application No. 18/901,951

CAMERA PEEK INTO TURN

Non-Final OA: §103, §DP
Filed
Sep 30, 2024
Examiner
HESS, MICHAEL J
Art Unit
2481
Tech Center
2400 — Computer Networks
Assignee
Waymo LLC
OA Round
1 (Non-Final)
Grant Probability: 44% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 52%

Examiner Intelligence

Career Allow Rate: 44% (183 granted / 418 resolved; -14.2% vs TC avg)
Interview Lift: +7.7% for resolved cases with interview (moderate)
Typical Timeline: 3y 1m average prosecution; 66 currently pending
Career History: 484 total applications across all art units

Statute-Specific Performance

§101: 4.6% (-35.4% vs TC avg)
§103: 56.8% (+16.8% vs TC avg)
§102: 10.3% (-29.7% vs TC avg)
§112: 20.8% (-19.2% vs TC avg)
Compared against the Tech Center average estimate; based on career data from 418 resolved cases.

Office Action

§103, §DP
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Objections

Claims 17–20 are objected to because of the following informalities: Claim 17’s preamble states, “when executed,” but does not affirmatively recite that the execution is performed by one or more computing devices; thus the instructions stored on the computer-readable medium are not specifically tethered to a computer or processor. MPEP 2111.05(III) explains that machine-readable media should have a relationship between the programming and the computer system “with which it is associated” in order to demonstrate a functional relationship. Claim 17 merely states that the instructions are executed and that those instructions cause a (perhaps separate) computing device to perform an action. This is not as specific as tying the instructions being executed to the computing device that executes them. Instructions that may cause another (separate) computing device to perform some action can be broadly construed as not requiring that the instructions themselves be executed by a computing device (or similar). Examiner recommends the preamble be amended in a manner similar to, or consistent with, the following: “A non-transitory computer-readable medium storing instructions which, when executed by one or more computing devices, cause the one or more computing devices to perform a method comprising: …” Appropriate correction is required.
Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection.
A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1–20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1–17 of U.S. Patent No. 12,137,285 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims represent substantially overlapping subject matter regarding the anticipation of an upcoming turn of an autonomous vehicle based on distance and/or speed and changing a virtual camera’s orientation with respect to the turn. Certain features of the present claims are mapped to the reference patent in the following sentences. The currently claimed unprotected turns were thoroughly addressed in the appeals of the parent case(s). The currently claimed greater speed corresponding to greater distance in claim 5 is found in the reference patent’s claim 2.
The currently claimed height and pitch changes in claim 6 are found in the reference patent’s claim 9. The currently claimed mapping of angles in claim 8 is found in the reference patent’s claim 3. The preceding exemplary findings are sufficient for a prima facie showing and need not be exhaustive regarding the finding of substantially overlapping subject matter to support the propriety of the rejection.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 5–10, and 13–18 are rejected under 35 U.S.C. 103 as being unpatentable over Fujimoto (JPH1089988 A), Kusayanagi (US 2018/0286095 A1), and Abramson (US 2017/0279957 A1). Examiner incorporates into the rejection of these claims the rationales and findings of both Examiner and PTAB relevant to these claimed features from the appeals of App. Nos. 15/364,914 and 17/207,787, wherein Examiner was affirmed or affirmed-in-part, respectively.
Regarding claim 1, the combination of Fujimoto, Kusayanagi, and Abramson teaches or suggests a computer implemented method, comprising: maneuvering, by one or more computing devices, a vehicle operating in an autonomous mode, according to a first heading (Fujimoto does not appear to teach that the navigation system is used in autonomous vehicles; Kusayanagi, ¶ 0116: teaches the technology is applicable to autonomous vehicles; Kusayanagi, ¶ 0025: teaches a navigation device that is used to project a vehicle’s travel route); determining, by the one or more computing devices, (1) a first location where the vehicle is planning to perform a change from the first heading to a second heading (Fujimoto, ¶ 0022: teaches a distance calculated for the vehicle and compared against a threshold distance to determine when the vehicle is close enough to the intersection (turn) that the virtual viewpoint should be changed from a forward-view to an offset (right or left) view) and (2) a distance from the first location that is selected based on a current speed of the vehicle (Fujimoto, ¶ 0022: teaches a distance calculated for the vehicle and compared against a threshold distance to determine when the vehicle is close enough to the intersection (turn) that the virtual viewpoint should be changed from a forward-view to an offset (right or left) view; While Fujimoto teaches a threshold distance used to inform the changing of the virtual viewpoint of the camera and while the skilled artisan knows that distance thresholds in this art benefit from also understanding the speed of the vehicle so that greater distances are utilized for greater speeds of the vehicle (the time (t) is really the crucial factor, which everyone knows is distance/speed), it has been suggested in appeals of related applications in this family that a more definitive teaching of this relationship would be beneficial to the record; Therefore, while Fujimoto is viewed sufficient given the obvious nature of distance 
related to speed and time, Abramson unequivocally teaches the skilled artisan had in their possession the solution of utilizing distance and speed to calculate a response of a vehicle system to an upcoming turn; Abramson, ¶ 1132: teaches a vehicle navigation system calculating a distance before a turn to activate a turn signal wherein the distance takes into account “current speed and other conditions.”; Examiner further notes Kusayanagi, ¶ 0058: teaches a virtual viewpoint system takes account of speed); upon determining that a current location of the vehicle is less than or equal to the selected distance from the first location (Fujimoto, ¶ 0022: teaches a distance calculated for the vehicle and compared against a threshold distance to determine when the vehicle is close enough to the intersection (turn) that the virtual viewpoint should be changed from a forward-view to an offset (right or left) view), adjusting, by the one or more computing devices, an orientation of a virtual camera relative to the vehicle from an initial orientation corresponding to the first heading to an updated orientation corresponding to the second heading; and generating for display, by the one or more computing devices, a video corresponding to the virtual camera's updated orientation, the video being configured for presentation to passengers within the vehicle as the vehicle is operating in the autonomous mode (Fujimoto does not appear to teach that the navigation system is used in autonomous vehicles; Kusayanagi, ¶ 0116: teaches the technology is applicable to autonomous vehicles; Fujimoto, Abstract and ¶¶ 0022 and 0024–0026: teach changing an angle of a virtual viewpoint camera prior to a vehicle encountering an intersection so that the camera looks down the projected path of the vehicle at an angle commensurate with the turning direction identified by the navigation system wherein the distance from the intersection informs the timing of the adjustment to the virtual angle; Examiner 
notes Nix, not relied upon for the rejection of this claim, also teaches this feature in e.g. ¶¶ 0048–0049; Examiner further incorporates into the rejection of these claims the rationales relevant to these features from the appeals of App. Nos. 15/364,914 and 17/207,787). One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to combine the elements taught by Fujimoto, with those of Kusayanagi, because both references are drawn to the same field of endeavor such that one wishing to practice virtual camera functionality utilizing a vehicle’s systems and navigational capabilities would be led to their relevant teachings and because combining Fujimoto’s virtual camera viewpoint changes using vehicle turning information from vehicle navigation information, with Kusayanagi’s virtual viewpoint camera changes using vehicle turning information and navigation information for autonomous driving vehicles represents a mere combination of prior art elements, according to known methods, to yield a predictable result. This rationale applies to all combinations of Fujimoto and Kusayanagi used in this Office Action unless otherwise noted. One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to combine the elements taught by Fujimoto and Kusayanagi, with those of Abramson, because all three references are drawn to the same field of endeavor such that one wishing to develop vehicle safety (e.g. driver-assistance) and navigation systems would be led to their relevant teachings and because combining Fujimoto’s distance threshold for an upcoming turn with Abramson’s distance and speed considerations for an upcoming turn (intersection) represents a mere combination of prior art elements, according to known methods, to yield a predictable result.
This rationale applies to all combinations of Fujimoto, Kusayanagi, and Abramson used in this Office Action unless otherwise noted. Regarding claim 2, the combination of Fujimoto, Kusayanagi, and Abramson teaches or suggests the computer implemented method of claim 1, wherein the first location is an intersection at which the vehicle has planned to make a turn (Both Fujimoto and Abramson are drawn to an intersection; Fujimoto, ¶ 0022: teaches a distance calculated for the vehicle and compared against a threshold distance to determine when the vehicle is close enough to the intersection (turn) that the virtual viewpoint should be changed from a forward-view to an offset (right or left) view; Abramson, ¶ 1132: teaches a vehicle navigation system calculating a distance before a turn to activate a turn signal wherein the distance takes into account “current speed and other conditions.”). Regarding claim 5, the combination of Fujimoto, Kusayanagi, and Abramson teaches or suggests the computer implemented method of claim 1, wherein when the vehicle is traveling at a first rate of speed, the selected distance is greater than when the vehicle is traveling at a second rate of speed slower than the first rate of speed (Kusayanagi, ¶ 0058: teaches a virtual viewpoint system takes account of speed; Distance as a function of speed is a basic concept in vehicle safety systems; Think of braking distance, reaction distance, etc.; Abramson, ¶ 1132: teaches a vehicle navigation system calculating a distance before a turn to activate a turn signal wherein the distance takes into account “current speed and other conditions.”; Examiner further notes that the relationship between distance and vehicle speed is taught in driver’s manuals for state driving exams across the country and is well-within the common knowledge of the skilled artisan). 
Regarding claim 6, the combination of Fujimoto, Kusayanagi, and Abramson teaches or suggests the computer implemented method of claim 1, wherein a height and a pitch of the virtual camera are adjusted to present either more or less of the vehicle's surroundings during a driving operation in the autonomous mode (Kusayanagi, Figs. 2A–2C: illustrate changing a height and pitch of a virtual camera; Examiner notes other relevant cited art under the Conclusion Section of this Office Action). Regarding claim 7, the combination of Fujimoto, Kusayanagi, and Abramson teaches or suggests the computer implemented method of claim 1, further comprising rotating the virtual camera by a camera rotation angle corresponding to the change from the first heading to the second heading (Kusayanagi, Figs. 3 and 6–8: illustrate that the angle of the virtual viewpoint depends on the angle of the turn). Regarding claim 8, the combination of Fujimoto, Kusayanagi, and Abramson teaches or suggests the computer implemented method of claim 7, further comprising mapping one or more angles associated with the change from the first heading to the second heading to the camera rotation angle (Kusayanagi, Figs. 3 and 6–8: illustrate that the angle of the virtual viewpoint depends (is mapped to) on the angle of the turn). Claim 9 lists the same elements as claim 1, but in apparatus form rather than method form. Therefore, the rationale for the rejection of claim 1 applies to the instant claim. Claim 10 lists the same elements as claim 2, but in apparatus form rather than method form. Therefore, the rationale for the rejection of claim 2 applies to the instant claim. Claim 13 lists the same elements as claim 5, but in apparatus form rather than method form. Therefore, the rationale for the rejection of claim 5 applies to the instant claim. Claim 14 lists the same elements as claim 6, but in apparatus form rather than method form. 
Therefore, the rationale for the rejection of claim 6 applies to the instant claim. Claim 15 lists the same elements as claim 7, but in apparatus form rather than method form. Therefore, the rationale for the rejection of claim 7 applies to the instant claim. Claim 16 lists the same elements as claim 8, but in apparatus form rather than method form. Therefore, the rationale for the rejection of claim 8 applies to the instant claim. Claim 17 lists the same elements as claim 1, but in CRM form rather than method form. Therefore, the rationale for the rejection of claim 1 applies to the instant claim. Claim 18 lists the same elements as claim 2, but in CRM form rather than method form. Therefore, the rationale for the rejection of claim 2 applies to the instant claim. Claims 3, 4, 11, 12, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Fujimoto, Kusayanagi, Abramson, and Nix (US 2019/0164430 A1). Regarding claim 3, the combination of Fujimoto, Kusayanagi, Abramson, and Nix teaches or suggests the computer implemented method of claim 2, wherein the turn is an unprotected turn (Examiner finds this is an intended use recitation, but nevertheless the prior art teaches the feature; Nix, Figs. 4A and 4C: teach blind intersections handled by the virtual camera system wherein the turn may not require other vehicles to stop, such as the scenarios depicted in Nix, Figs. 4A and 4C). 
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to combine the elements taught by Fujimoto, Kusayanagi, and Abramson, with those of Nix, because all four references are drawn to the same field of endeavor such that one wishing to practice vehicle safety systems would be led to their relevant teachings, because at least Fujimoto, Kusayanagi, and Nix are all drawn to virtual camera viewpoints used in vehicle navigation and are reasonably pertinent to the problem due to their overlapping essential functions, and because combining Fujimoto’s virtual camera viewpoint changes using vehicle turning information from vehicle navigation information, with Kusayanagi’s virtual viewpoint camera changes using vehicle turning information and navigation information for autonomous driving vehicles, and further with Nix’s virtual viewpoint based on a projected turn not being turned all the way represents a mere combination of prior art elements, according to known methods, to yield a predictable result. This rationale applies to all combinations of Fujimoto, Kusayanagi, Abramson, and Nix used in this Office Action unless otherwise noted. Regarding claim 4, the combination of Fujimoto, Kusayanagi, Abramson, and Nix teaches or suggests the computer implemented method of claim 3, wherein the unprotected turn is either where the vehicle is turning onto a road where traffic is detected to be moving, or where the road the vehicle is turning onto does not require other vehicles to stop at the intersection (Examiner finds this is an intended use recitation, but nevertheless the prior art teaches the feature; Nix, Figs. 4A and 4C: teach blind intersections handled by the virtual camera system wherein the turn may not require other vehicles to stop, such as the scenarios depicted in Nix, Figs. 4A and 4C). Claim 11 lists the same elements as claim 3, but in apparatus form rather than method form. 
Therefore, the rationale for the rejection of claim 3 applies to the instant claim. Claim 12 lists the same elements as claim 4, but in apparatus form rather than method form. Therefore, the rationale for the rejection of claim 4 applies to the instant claim. Claim 19 lists the same elements as claim 3, but in CRM form rather than method form. Therefore, the rationale for the rejection of claim 3 applies to the instant claim. Claim 20 lists the same elements as claim 4, but in CRM form rather than method form. Therefore, the rationale for the rejection of claim 4 applies to the instant claim.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Abramson (US 2017/0279957 A1) teaches a vehicle navigation system calculating a distance before a turn to activate a turn signal wherein the distance takes into account “current speed and other conditions.” (¶ 1132). Zelman (US 2017/0190334 A1) teaches that prediction algorithm cues for determining host vehicle intent can include “distance to the intersection, turn signal activity, velocity of the host vehicle,” etc. (¶ 0031). Yamada (US 2014/0292805 A1) teaches a virtual viewpoint changing to a diagonal-front view according to an indication from the turn signal indicator wherein the angle of the view is determined based on the “predicted traveling direction” of the vehicle (e.g. ¶ 0122). Kojima, et al., “NaviView: Visual Assistance by Virtual Mirrors at Blind Intersection,” October 2005. Moore (US 2015/0345976 A1) teaches virtual camera turns ahead of actual turns (e.g. ¶¶ 0030–0038). Vulcano (US 2014/0365126 A1) teaches a virtual camera following turn-by-turn navigation input (e.g. ¶ 0093) and taking into account threshold distances from intersections (¶ 0317). Taylor (US 2014/0192181 A1) teaches generating a virtual view (¶ 0089) and taking account of the projected next step in a navigation such as a threshold distance from an intersection (e.g. ¶ 0108).
Tertoolen (US 10,527,445 B2) teaches a three-dimensional perspective virtual camera view positioned at an elevation and pitch angle behind a current position of the vehicle that is updated to follow along as the vehicle travels along a planned route and adapting the generated view, in response to detecting that the determined current lane in which the device is travelling differs from a lane or lanes associated with a maneuver to be made at an upcoming decision point (Abstract) and generates a fast-forward display along the route that is faster than the current speed of the vehicle (col. 6, ll. 31–38). Hiramatsu (US 2020/0026284 A1) teaches an autonomous vehicle being approved to move through an intersection (Abstract), detecting a congestion condition at the intersection (e.g. ¶ 0020), autonomous driving control along a followed travel route including at an intersection and during a temporary stop (e.g. ¶ 0027), and teaches detecting a distance to an upcoming intersection or a time-distance in view of vehicle speed (¶¶ 0033 and 0045). Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael J Hess whose telephone number is (571)270-7933. The examiner can normally be reached Mon - Fri 9:00am-5:30pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn can be reached on (571)272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8933. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. 
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MICHAEL J HESS/Examiner, Art Unit 2481
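For readers mapping the rejection back to the claim language, the trigger logic at issue in claims 1 and 5 (a look-ahead distance selected from current speed, with the virtual camera reoriented once the vehicle is inside it) can be sketched in a few lines. This is an editorial illustration only, not the applicant's implementation or any reference's disclosure; every name in it (LOOK_AHEAD_SECONDS, select_distance, camera_heading) is hypothetical.

```python
# Illustrative sketch of the claimed trigger logic as characterized in the
# §103 discussion; all names and constants here are hypothetical.

LOOK_AHEAD_SECONDS = 4.0  # time headway: distance threshold = speed * time

def select_distance(current_speed_mps: float) -> float:
    """Greater speed yields a greater selected distance (the claim-5 relationship)."""
    return current_speed_mps * LOOK_AHEAD_SECONDS

def camera_heading(distance_to_turn_m: float, current_speed_mps: float,
                   first_heading_deg: float, second_heading_deg: float) -> float:
    """Return the virtual camera heading: once the vehicle is within the
    speed-selected distance of the planned turn, the camera is reoriented
    from the first heading to the second heading (the claim-1 trigger)."""
    if distance_to_turn_m <= select_distance(current_speed_mps):
        return second_heading_deg
    return first_heading_deg

# At 20 m/s the threshold is 80 m, so 60 m from the turn the camera already
# faces the turn; at 10 m/s the threshold is only 40 m, so it still faces ahead.
print(camera_heading(60.0, 20.0, 0.0, 90.0))  # 90.0
print(camera_heading(60.0, 10.0, 0.0, 90.0))  # 0.0
```

The sketch also makes the claim-5 relationship concrete: doubling the speed doubles the selected distance, so the camera pivots earlier at higher speeds.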

Prosecution Timeline

Sep 30, 2024
Application Filed
Apr 02, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12563195
Method And An Apparatus for Encoding and Decoding of Digital Image/Video Material
2y 5m to grant Granted Feb 24, 2026
Patent 12563208
PICTURE CODING METHOD, PICTURE CODING APPARATUS, PICTURE DECODING METHOD, AND PICTURE DECODING APPARATUS
2y 5m to grant Granted Feb 24, 2026
Patent 12556737
MOTION COMPENSATION FOR VIDEO ENCODING AND DECODING
2y 5m to grant Granted Feb 17, 2026
Patent 12556747
ARRAY BASED RESIDUAL CODING ON NON-DYADIC BLOCKS
2y 5m to grant Granted Feb 17, 2026
Patent 12549728
METHOD AND APPARATUS FOR CODING VIDEO DATA IN TRANSFORM-SKIP MODE
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 44%
With Interview: 52% (+7.7%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 418 resolved cases by this examiner. Grant probability derived from career allow rate.
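The projection figures above reduce to simple arithmetic on the examiner's career data shown earlier; a minimal sketch, assuming straightforward rounding (the tool's actual model is not disclosed):

```python
# Illustrative arithmetic only; variable names are not the tool's actual model.
granted, resolved = 183, 418

base_pct = round(100 * granted / resolved)   # career allow rate -> 44
with_interview_pct = round(base_pct + 7.7)   # +7.7-point interview lift -> 52

print(base_pct, with_interview_pct)  # 44 52
```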
