Prosecution Insights
Last updated: April 19, 2026
Application No. 18/959,521

SYSTEMS AND METHODS FOR INFERRING INFORMATION ABOUT STATIONARY ELEMENTS BASED ON SEMANTIC RELATIONSHIPS

Non-Final OA: §101, §103, §112
Filed: Nov 25, 2024
Examiner: MALKOWSKI, KENNETH J
Art Unit: 3667
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Lyft Inc.
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 7m
Grant Probability with Interview: 94%

Examiner Intelligence

Career Allow Rate: 75%, above average (480 granted / 642 resolved; +22.8% vs TC avg)
Interview Lift: +19.1%, strong (allowance rate for resolved cases with an interview vs. without)
Typical Timeline: 2y 7m avg prosecution; 22 applications currently pending
Career History: 664 total applications across all art units
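As a quick check, the headline figures above can be reproduced from the raw counts shown on this page. The sketch below is a minimal illustration only; the variable names and the additive treatment of the interview lift are assumptions, not the tool's actual model.

```python
# Minimal sketch (assumptions noted): reproduce the examiner's headline metrics
# from the counts shown above. Treating the interview lift as a simple additive
# adjustment is an assumption for illustration only.

granted = 480            # career grants among resolved cases
resolved = 642           # career resolved cases
interview_lift = 0.191   # reported allowance-rate lift for cases with an interview

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")              # ~74.8%, displayed as 75%

# Assumed: the "with interview" figure = base allow rate + reported lift.
with_interview = allow_rate + interview_lift
print(f"Allow rate with interview: {with_interview:.1%}")  # ~93.9%, displayed as 94%
```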

Statute-Specific Performance

§101: 8.3% (-31.7% vs TC avg)
§103: 40.7% (+0.7% vs TC avg)
§102: 20.4% (-19.6% vs TC avg)
§112: 27.7% (-12.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 642 resolved cases
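The per-statute deltas above appear to be arithmetic differences against a Tech Center baseline. A minimal sketch, assuming that interpretation, backs the implied baseline out of each pair of figures; with the numbers shown, every statute implies the same 40.0% Tech Center estimate.

```python
# Minimal sketch (assumption: "vs TC avg" is an arithmetic difference, i.e.
# delta = examiner_rate - tc_avg). Backs out the implied Tech Center average
# from each statute-specific figure shown above.

stats = {
    "§101": (0.083, -0.317),
    "§103": (0.407, +0.007),
    "§102": (0.204, -0.196),
    "§112": (0.277, -0.123),
}

for statute, (examiner_rate, delta_vs_tc) in stats.items():
    implied_tc_avg = examiner_rate - delta_vs_tc
    print(f"{statute}: examiner {examiner_rate:.1%}, implied TC avg {implied_tc_avg:.1%}")
# Each implied baseline comes out to 40.0% with these figures.
```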

Office Action

§101 §103 §112
DETAILED ACTION Claim Rejections - 35 USC § 101 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more. In sum, claims 1-20 are rejected under 35 U.S.C. §101 because the claimed invention is directed to a judicial exception to patentability (i.e., a law of nature, a natural phenomenon, or an abstract idea) and do not include an inventive concept that is something “significantly more” than the judicial exception under the January 2019 patentable subject matter eligibility guidance (2019 PEG) analysis which follows. Revised Guidance Step 2A – Prong 1 Under the 2019 PEG step 2A, Prong 1 analysis, it must be determined whether the claims recite an abstract idea that falls within one or more designated categories of patent ineligible subject matter (i.e., organizing human activity, mathematical concepts, and mental processes) that amount to a judicial exception to patentability. Here, with respect to independent claims 1, 12 and 20, the claims recite the abstract idea of: after one or more sensor-equipped vehicles have traversed a real-world environment and captured sensor data that is representative of the real-world environment, performing an analysis of the sensor data; based on the analysis of the sensor data, deriving a set of information about a traffic light within the real-world environment that includes one or more of: (i) signal-face information that comprises an identification of each signal face of the traffic light or (ii) traffic-rule information that comprises an indication of at least one traffic rule that is applicable to the traffic light; and encoding the derived set of information about the traffic light into a map for the real-world environment. Specifically, a mental process, that can be performed in the human mind since the above limitations could alternatively be performed in the human mind or with the aid of pen and paper. This conclusion follows from CyberSource Corp. v. Retail Decisions, Inc., where our reviewing court held that section 101 did not embrace a process defined simply as using a computer to perform a series of mental steps that people, aware of each step, can and regularly do perform in their heads. 654 F.3d 1366, 1373 (Fed. Cir. 2011); see also In re Grams, 888 F.2d 835, 840–41 (Fed. Cir. 1989); In re Meyer, 688 F.2d 789, 794–95 (CCPA 1982); Elec. Power Group, LLC v. Alstom S.A., 830 F. 3d 1350, 1354–1354 (Fed. Cir. 2016) (“we have treated analyzing information by steps people go through in their minds, or by mathematical algorithms, without more, as essentially mental processes within the abstract-idea category”). For example, a human mind can perform an analysis on sensor data after it is captured including mentally deriving information about a traffic light including signal face information or traffic rule information. Furthermore, mental processes remain unpatentable even when automated to reduce the burden on the user of what once could have been done with pen and paper. See CyberSource, 654 F.3d at 1375 (“That purely mental processes can be unpatentable, even when performed by a computer, was precisely the holding of the Supreme Court in Gottschalk v. 
Benson.”) such that a human could perform the encoding step by drawing information onto a map. Revised Guidance Step 2A – Prong 2 Under the 2019 PEG step 2A, Prong 2 analysis, the identified abstract idea to which the claim is directed does not include limitations that integrate the abstract idea into a practical application, since the recited features of the abstract idea are being applied on a computer or computing device or via software programming that is simply being used as a tool (“apply it”) to implement the abstract idea. (See, e.g., MPEP §2106.05(f)). This follows conclusion follows from the claim limitations which only recite a generic “non-transitory computer readable medium” (claim 12); a “computing system”, “processor” (claim 20) outside of the abstract idea. Claim 1 does not include additional elements outside of the abstract idea. In addition, merely “[u]sing a computer to accelerate an ineligible mental process does not make that process patent-eligible.” Bancorp Servs., L.L.C. v. Sun Life Assur. Co. of Canada (U.S.), 687 F.3d 1266, 1279 (Fed. Cir. 2012); see also CLS Bank Int’l v. Alice Corp. Pty. Ltd., 717 F.3d 1269, 1286 (Fed. Cir. 2013) (en banc) (“simply appending generic computer functionality to lend speed or efficiency to the performance of an otherwise abstract concept does not meaningfully limit claim scope for purposes of patent eligibility.”), aff’d, 573 U.S. 208 (2014). Accordingly, the additional element of a controller does not transform the abstract idea into a practical application of the abstract idea. In addition, the passively recited limitation “after one or more sensor-equipped vehicles have traversed a real-world environment and captured sensor data that is representative of the real-world environment” constitutes insignificant pre-solution activity that merely gathers data and, therefore, do not integrate the exception into a practical application. See In re Bilski, 545 F.3d 943, 963 (Fed. Cir. 2008) (en banc), aff’d on other grounds, 561 U.S. 593 (2010) (characterizing data gathering steps as insignificant extra-solution activity); see also CyberSource, 654 F.3d at 1371–72 (noting that even if some physical steps are required to obtain information from a database (e.g., entering a query via a keyboard, clicking a mouse), such data-gathering steps cannot alone confer patentability); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering). Accord Guidance, 84 Fed. Reg. at 55 (citing MPEP § 2106.05(g)). Furthermore, the limitation “encoding the derived set of information about the traffic light into a map for the real-world environment” is insignificant post-solution activity. The Supreme Court guides that the “prohibition against patenting abstract ideas ‘cannot be circumvented by attempting to limit the use of the formula to a particular technological environment’ or [by] adding ‘insignificant post-solution activity.’” Bilski, 561 U.S. at 610–11 (quoting Diehr, 450 U.S. at 191–92). Revised Guidance Step 2B Under the 2019 PEG step 2B analysis, the additional elements are evaluated to determine whether they amount to something “significantly more” than the recited abstract idea. (i.e., an innovative concept). 
Here, the additional elements, such as a non-transitory computer readable medium”; “computing system”, and a “processor” do not amount to an innovative concept since, as stated above in the step 2A, Prong 2 analysis, the claims are simply using the additional elements as a tool to carry out the abstract idea (i.e., “apply it”) on a computer or computing device and/or via software programming (See, e.g., MPEP §2106.05(f)). The additional elements are specified at a high level of generality to simply implement the abstract idea and are not themselves being technologically improved. See, e.g., MPEP §2106.05 I.A; Alice, 573 U.S. at 223 (“[T]he mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention.”). Thus, these elements, taken individually or together, do not amount to “significantly more” than the abstract ideas themselves. The additional elements of the dependent claims merely refine and further limit the abstract idea of the independent claims and do not add any feature that is an “inventive concept” which cures the deficiencies of their respective parent claim under the 2019 PEG analysis. None of the dependent claims considered individually, including their respective limitations, include an “inventive concept” of some additional element or combination of elements sufficient to ensure that the claims in practice amount to something “significantly more” than patent-ineligible subject matter to which the claims are directed. The elements of the instant process steps when taken in combination do not offer substantially more than the sum of the functions of the elements when each is taken alone. The claims as a whole, do not amount to significantly more than the abstract idea itself because the claims do not effect an improvement to another technology or technical field; the claims do not amount to an improvement to the functioning of an electronic device itself which implements the abstract idea (e.g., the general purpose computer and/or the computer system which implements the process are not made more efficient or technologically improved); the claims do not perform a transformation or reduction of a particular article to a different state or thing (i.e., the claims do not use the abstract idea in the claimed process to bring about a physical change. See, e.g., Diamond v. Diehr, 450 U.S. 175 (1981), where a physical change, and thus patentability, was imparted by the claimed process; contrast, Parker v. Flook, 437 U.S. 584 (1978), where a physical change, and thus patentability, was not imparted by the claimed process); and the claims do not move beyond a general link of the use of the abstract idea to a particular technological environment (e.g., “after one or more sensor-equipped vehicles have traversed a real-world environment” claim 1; “a map for a real-world environment”, claim 1). Claim Rejections - 35 USC § 112 The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. Claims 10-11 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. 
With respect to claim 10, the limitations are unclear because limitations i) -iii) do not include a conjunctive term such as “and”, “or”, “and/or”, etc.. For example, in claim 1, options i) and ii) have the conjunctive “or”. Accordingly, under the broadest reasonable interpretation, the claims are interpreted as having the conjunctive “or” as a default interpretation unless the claim is provided with a clarifying amendment. Claim 11 is rejected on the basis of dependency. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. 20180188060 to Wheeler et al. (Wheeler) in view of U.S. 20170262709 to Wellington et al. (Wellington) With respect to claims 1, 12 and 20, Wheeler discloses a computer-implemented method comprising: after one or more sensor-equipped vehicles have traversed a real-world environment and captured sensor data that is representative of the real-world environment, performing an analysis of the sensor data; (i.e., probe vehicles 150a-150d, 910, FIG. 9 “Receive an image with a traffic sign capture by a camera mounted on a vehicle”; ¶¶ 59-63, 65, 70) based on the analysis of the sensor data, deriving a set of information about a traffic light within the real-world environment (¶¶ 200 “Features are everything on a map that is . . . automatically generated . . . interpolate the geometry among the control points . . . primary features . . . a sign or lane boundary that is automatically generated. . . primary features . . . traffic lights and traffic signs . . . Derived features are features that are inferred and constructed from primary features. The properties of a derived feature depends on other features”; 220 “another type of primary feature, an association link is used to annotate a lane element. Similar to the lane connector, the association link may connect a traffic light to a lane element that it controls . . . the system can infer which lane elements are controlled by a traffic light from the geometric relationship of the lane element to the traffic light . . . For example, this may be based on traffic light orientation and distance. The association link may also connect a traffic sign to a lane element it controls. Traffic signs like yield signs, stop signs and speed limits are similar to traffic lights . . . 
association link is used to specify which lane element is controlled by the traffic sign”); and encoding the derived set of information about the traffic light into a map for the real-world environment. (¶¶ 200 features on a map automatically generated . . . traffic lights, traffic signs; 93 Examples of road signs described in an HD map include stop signs, traffic lights; 102 “features 720a and 720b that are associated with the lane . . . HD map system 100 stores a lane-centric representation of data that represents the relationship of the lane to the feature”; 186 lane element graph allows navigation of autonomous vehicles through a mapped area. Each lane element is associated with the traffic restrictions that apply to it such as speed limit, speed bump, and traffic signs and signals . . . semantic association between lane elements and features (e.g., speed limit in current lane element) to assist in on-vehicle routing and planning needs”; 190 Each lane elements can be associated with features that only affect local lane elements (e.g., stop sign, yield sign, or traffic light)”; 197-198) (i.e., relationship between stop sign 720a, traffic light 720b, FIG. 7 and corresponding description; ¶¶ 102-104; FIG. 11 traffic sign 1105, traffic sign text 1130, HD map 1150; FIG. 41 lane boundaries geometry, traffic sign input to lane element graph as derived features, derived attributes speed limit, left only, etc.; ¶ 106-110 sign creation in HD maps . . . encode semantic data) (claim 9 “determining a type of the traffic sign . . . storing the type of the traffic sign as an attribute of the traffic sign in the three-dimensional map”; claim 10 “determining text on the traffic sign based on a neural network analysis of characters on the portion of the image corresponding to the traffic sign; and storing the text on the traffic sign as an attribute of the traffic sign”; FIG. 11 stop, traffic sign text 1130; ¶7 “characteristics of the traffic sign can be stored as attributes of the traffic sign in the HD map, e.g., text of the traffic sign, limitations described by the traffic sign, or type of traffic sign”; ¶¶ 89, 113, 137 “determine legal limitations dependent on the text. For example, the text "STOP" necessitates a legal requirement to come to a full stop at an intersection with the traffic sign. These further limitations may be additionally stored as additional attributes of the traffic sign in the HD map 1150”). (section “sign creation in HD maps” ¶¶ 106-120, wherein the disclosure includes traffic lights under the rubric of signs, i.e., ¶ 93) Wheeler fails to explicitly disclose the derived information includes one or more of (i) signal-face information that comprises an identification of each signal face of the traffic light or (ii) traffic-rule information that comprises an indication of at least one traffic rule that is applicable to the traffic light Wellington, from the same field of endeavor, discloses derived information includes one or more of (i) signal-face information that comprises an identification of each signal face of the traffic light or (ii) traffic-rule information that comprises an indication of at least one traffic rule that is applicable to the traffic light (signal face information: FIG. 5 face 512, 514, Subset 522, 524; 705, 707 Fig. 
7; ¶¶ 41 detect traffic signaling systems; 44; 63 identify the traffic signal faces for a particular direction of an intersection, label the points of interest identifying the exact locations of the signal faces, determine the subsets for the traffic signal system, correlate the subsets to the lanes, and store each labeled map in a mapping log; 65 signal map 427 can indicate the nature of each signal face ( e.g., the number of signal bulbs, the locations of signal bulbs, the size and locations of the faces themselves, the subset(s) for the signal faces, the lane(s) associated with the subset(s), and the like”; 68 traffic signal analysis system 400 can continuously operate to receive and monitor the image data 407 for traffic signals (traffic rule information: i.e., ¶ 75 “if subset 524 were in the red state, the signal analysis system 400 could determine that the autonomous vehicle may proceed with the right turn action after coming to a stop while yielding to right-of-way vehicles (unless otherwise indicated by, for example, a “no turn on red” sign). Accordingly, in one example, the signal analysis system 400 can monitor the intersection, using the camera 405, for any limiting signs that place a qualifier on a particular action (e.g., “no turn on red” limiting sign for right-turn actions)”; 72 system 400 can further monitor subset 524 to ultimately determine whether the autonomous vehicle can perform the left turn or U-turn action through the intersection”) Accordingly, it would have been obvious to one of ordinary skill in the art at the time of effective filing date for the system of Wheeler to derive the above cited set of information about a traffic light, as taught by Wellington, in order to improve autonomous vehicle decision making at intersections and thereby improve safety (Wellington, ¶¶ 1-2). With respect to claims 2 and 13, Wheeler in view of Wellington disclose the signal-face information further comprises an indication of at least one activation sequence for the signal faces of the traffic light. (Wellington, ¶¶ 51 “As the autonomous vehicle 200 approaches the intersection, the image processor 265 can continuously update the template and project the template onto the image data 262 to perform a probabilistic matching operation to determine the state ( e.g., red, yellow, green) of each subset of the traffic signal system.”; 60 Once the traffic signal 330 is detected, the traffic signal analysis system can perform operations to dynamically determine and/or predict the state of the traffic signal for the pass-through action of the autonomous vehicle 310 as the vehicle approaches the intersection, “Traffic Signal Analysis System” section, i.e., ¶¶ 64 “perform a probabilistic state matching operation dynamically, as the autonomous vehicle approaches the traffic signal system, to determine the state of the traffic signaling system for each pass-through action (e.g., left turn, right turn, straight through action) in real time”; 76 matching signal map 427 can also indicate timing characteristics of the traffic signaling system 500. For example, the timing characteristics can indicate how long a particular light on a particular subset will remain in a yellow state ( or a red state or green state). 
The signal analysis system 400 can utilize the timing characteristics to predict an estimated state for the passthrough action when the autonomous vehicle reaches the intersection”; 72; claim 11) With respect to claims 3 and 14, Wheeler in view of Wellington disclose the at least one activation sequence for the signal faces of the traffic light comprises multiple activation sequences corresponding to different times of day. (Wellington, ¶¶ 58 combine sensor data taken at different times; 62-63 catalog of signal maps for every direction of every intersection . . .identify traffic signal faces . . . label . . . identify the exact location, correlate subsets . . . in a mapping log . . . generate signal maps for future vehicles traveling though given region in which signal maps 433 were recorded; i.e., multiple activation sequences traffic signal system continually records the traffic signal and dynamically updates the signal map ¶¶ 14, 51, 63-64; similarly see Wheeler for updates on the same landmark at different timestamps, i.e., 62, 64, 86, 58 freshness of data by ensuring that the map is updated to reflect changes on the road within a reasonable time frame; 79, 106) (Wellington, ¶¶ 51 “As the autonomous vehicle 200 approaches the intersection, the image processor 265 can continuously update the template and project the template onto the image data 262 to perform a probabilistic matching operation to determine the state ( e.g., red, yellow, green) of each subset of the traffic signal system.”; 60 Once the traffic signal 330 is detected, the traffic signal analysis system can perform operations to dynamically determine and/or predict the state of the traffic signal for the pass-through action of the autonomous vehicle 310 as the vehicle approaches the intersection, “Traffic Signal Analysis System” section, i.e., ¶¶ 64 “perform a probabilistic state matching operation dynamically, as the autonomous vehicle approaches the traffic signal system, to determine the state of the traffic signaling system for each pass-through action (e.g., left turn, right turn, straight through action) in real time”; 76 matching signal map 427 can also indicate timing characteristics of the traffic signaling system 500. For example, the timing characteristics can indicate how long a particular light on a particular subset will remain in a yellow state (or a red state or green state). The signal analysis system 400 can utilize the timing characteristics to predict an estimated state for the passthrough action when the autonomous vehicle reaches the intersection”; 72; claim 11) With respect to claims 4 and 15, Wheeler in view of Wellington disclose the signal-face information further comprises, for each signal face of the traffic light, an indication of a respective length of time during which the signal face is activated. 
(Wellington, ¶¶ 51 “As the autonomous vehicle 200 approaches the intersection, the image processor 265 can continuously update the template and project the template onto the image data 262 to perform a probabilistic matching operation to determine the state ( e.g., red, yellow, green) of each subset of the traffic signal system.”; 60 Once the traffic signal 330 is detected, the traffic signal analysis system can perform operations to dynamically determine and/or predict the state of the traffic signal for the pass-through action of the autonomous vehicle 310 as the vehicle approaches the intersection, “Traffic Signal Analysis System” section, i.e., ¶¶ 64 “perform a probabilistic state matching operation dynamically, as the autonomous vehicle approaches the traffic signal system, to determine the state of the traffic signaling system for each pass-through action (e.g., left turn, right turn, straight through action) in real time”; 76 matching signal map 427 can also indicate timing characteristics of the traffic signaling system 500. For example, the timing characteristics can indicate how long a particular light on a particular subset will remain in a yellow state ( or a red state or green state). The signal analysis system 400 can utilize the timing characteristics to predict an estimated state for the passthrough action when the autonomous vehicle reaches the intersection”; 72; claim 11) With respect to claims 5 and 16, Wheeler in view of Wellington disclose the at least one traffic rule that is applicable to the traffic light comprises a traffic rule that is applicable to a given signal face of the traffic light. (Wellington, ¶ 75 “if subset 524 were in the red state, the signal analysis system 400 could determine that the autonomous vehicle may proceed with the right turn action after coming to a stop while yielding to right-of-way vehicles (unless otherwise indicated by, for example, a “no turn on red” sign). Accordingly, in one example, the signal analysis system 400 can monitor the intersection, using the camera 405, for any limiting signs that place a qualifier on a particular action (e.g., “no turn on red” limiting sign for right-turn actions)”; 72 system 400 can further monitor subset 524 to ultimately determine whether the autonomous vehicle can perform the left turn or U-turn action through the intersection”). With respect to claims 6 and 17, Wheeler in view of Wellington disclose the derived set of information about the traffic light further includes lane-control information that comprises an indication of a given lane that is controlled by a given signal face of the traffic light. (Wellington, ¶¶ 75 “if subset 524 were in the red state, the signal analysis system 400 could determine that the autonomous vehicle may proceed with the right turn action after coming to a stop while yielding to right-of-way vehicles (unless otherwise indicated by, for example, a “no turn on red” sign). 
Accordingly, in one example, the signal analysis system 400 can monitor the intersection, using the camera 405, for any limiting signs that place a qualifier on a particular action (e.g., “no turn on red” limiting sign for right-turn actions)”; 72 system 400 can further monitor subset 524 to ultimately determine whether the autonomous vehicle can perform the left turn or U-turn action through the intersection”; 63 “identify the traffic signal faces for a particular direction of an intersection, label the points of interest identifying the exact locations of the signal faces, determine the subsets for the traffic signal system, correlate the subsets to the lanes, and store each labeled map in a mapping log”) With respect to claims 7 and 18, Wheeler in view of Wellington disclose the identification of each signal face of the traffic light comprises: a location of the signal face within the traffic light and a type of the signal face. (Wellington, ¶¶ 63 “identify the traffic signal faces for a particular direction of an intersection, label the points of interest identifying the exact locations of the signal faces, determine the subsets for the traffic signal system, correlate the subsets to the lanes, and store each labeled map in a mapping log”; FIUG. 5, face/ face subset 512, 514, 522, 524, 526, templates 530; FIG. 7, 720-724; 2 Road intersections include traffic signaling systems that can range from simple three-bulb faces to complex directional and yielding signals; 14, 16, 63-66) With respect to claims 8 and 19, Wheeler in view of Wellington disclose the sensor data is captured over a multi-day period of time. (Wellington, ¶¶ 44-45 signal maps 233 in the database 230 can include previously recorded traffic signal data verified for accuracy and quality . . . compare the sensor data 257 from the sensor array 255 with a current data sub-map 238; 58 combine sensor data taken at different times; 62-63 catalog of signal maps for every direction of every intersection . . .identify traffic signal faces . . . label . . . identify the exact location, correlate subsets . . . in a mapping log . . . 
generate signal maps for future vehicles traveling though given region in which signal maps 433 were recorded; i.e., multiple activation sequences traffic signal system continually records the traffic signal and dynamically updates the signal map ¶¶ 14, 51, 63-64; similarly see Wheeler for updates on the same landmark at different timestamps, i.e., 62, 64, 86, 58 freshness of data by ensuring that the map is updated to reflect changes on the road within a reasonable time frame; 79, 106) (Wellington, ¶¶ 51 “As the autonomous vehicle 200 approaches the intersection, the image processor 265 can continuously update the template and project the template onto the image data 262 to perform a probabilistic matching operation to determine the state ( e.g., red, yellow, green) of each subset of the traffic signal system.”; 60 Once the traffic signal 330 is detected, the traffic signal analysis system can perform operations to dynamically determine and/or predict the state of the traffic signal for the pass-through action of the autonomous vehicle 310 as the vehicle approaches the intersection, “Traffic Signal Analysis System” section, i.e., ¶¶ 64 “perform a probabilistic state matching operation dynamically, as the autonomous vehicle approaches the traffic signal system, to determine the state of the traffic signaling system for each pass-through action (e.g., left turn, right turn, straight through action) in real time”; 76 matching signal map 427 can also indicate timing characteristics of the traffic signaling system 500. For example, the timing characteristics can indicate how long a particular light on a particular subset will remain in a yellow state (or a red state or green state). The signal analysis system 400 can utilize the timing characteristics to predict an estimated state for the passthrough action when the autonomous vehicle reaches the intersection”; 72; claim 11) With respect to claim 10, Wheeler in view of Wellington disclose deriving the set of information about the traffic light comprises: deriving the traffic-rule information by (i) detecting a semantic relationship1 between the traffic light and one or more other stationary elements within the real-world environment, (ii) performing an analysis of the one or more other stationary elements, (iii) based on the analysis of the one or more other stationary elements, deriving the indication of the at least one traffic rule that is applicable to the traffic light. (Wheeler, ¶ 220 “another type of primary feature, an association link is used to annotate a lane element. Similar to the lane connector, the association link may connect a traffic light to a lane element that it controls . . . the system can infer which lane elements are controlled by a traffic light from the geometric relationship of the lane element to the traffic light . . . For example, this may be based on traffic light orientation and distance. The association link may also connect a traffic sign to a lane element it controls. Traffic signs like yield signs, stop signs and speed limits are similar to traffic lights . . . association link is used to specify which lane element is controlled by the traffic sign”; ¶102 “features 720a and 720b that are associated with the lane . . . HD map system 100 stores a lane-centric representation of data that represents the relationship of the lane to the feature”; 186 lane element graph allows navigation of autonomous vehicles through a mapped area. 
Each lane element is associated with the traffic restrictions that apply to it such as speed limit, speed bump, and traffic signs and signals . . . semantic association between lane elements and features (e.g., speed limit in current lane element) to assist in on-vehicle routing and planning needs”; 190 Each lane elements can be associated with features that only affect local lane elements (e.g., stop sign, yield sign, or traffic light)”; 197-198; i.e., relationship between stop sign 720a, traffic light 720b, FIG. 7 and corresponding description; ¶¶ 102-104; FIG. 11 traffic sign 1105, traffic sign text 1130, HD map 1150; FIG. 41 lane boundaries geometry, traffic sign input to lane element graph as derived features, derived attributes speed limit, left only, etc.; ¶ 106-110 sign creation in HD maps . . . encode semantic data) (Wellington, i.e., traffic light and sign are in same intersection, ¶ 75 “if subset 524 were in the red state, the signal analysis system 400 could determine that the autonomous vehicle may proceed with the right turn action after coming to a stop while yielding to right-of-way vehicles (unless otherwise indicated by, for example, a “no turn on red” sign). Accordingly, in one example, the signal analysis system 400 can monitor the intersection, using the camera 405, for any limiting signs that place a qualifier on a particular action (e.g., “no turn on red” limiting sign for right-turn actions)”; 72 system 400 can further monitor subset 524 to ultimately determine whether the autonomous vehicle can perform the left turn or U-turn action through the intersection”) With respect to claim 11, Wheeler in view of Wellington disclose the analysis of the one or more other stationary elements comprises an analysis of one or both of text or images appearing on a traffic sign. (Wheeler, FIG. 11, 1130 “traffic sign text”, “STOP”; ¶¶ 7 characteristics of the traffic sign can be stored as attributes of the traffic sign in the HD map, e.g., text of the traffic sign, limitations described by the traffic sign, or type of traffic sign; 20, 60, 89, 113, 137) (Wellington, ¶ 75 “if subset 524 were in the red state, the signal analysis system 400 could determine that the autonomous vehicle may proceed with the right turn action after coming to a stop while yielding to right-of-way vehicles (unless otherwise indicated by, for example, a “no turn on red” sign). Accordingly, in one example, the signal analysis system 400 can monitor the intersection, using the camera 405, for any limiting signs that place a qualifier on a particular action (e.g., “no turn on red” limiting sign for right-turn actions)”; 72 system 400 can further monitor subset 524 to ultimately determine whether the autonomous vehicle can perform the left turn or U-turn action through the intersection”) Prior Art The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 20200133283 A1 is cited to disclose: FIG. 7A-8B 47 whether the vehicle 10 is prohibited from proceeding in a specific direction even when the traffic signal emits a red light is different depending on whether there is a road sign indicating that the vehicle cannot proceed in a specific direction 48-49 FIG. 7A displays a traffic signal to be detected on which a traffic light 701 emitting a red light . . . the kind of the object to be detected (in this example, the traffic signal) for the area in which the traffic light 701 is displayed are tagged . . . 
a range 703 of the direction in which the vehicle 10 can proceed is tagged for the traffic light 701. In this example, the range 703 is set in the right direction of the vehicle 10 (in other words, turning right is possible). . . FIG. 7B displays a traffic signal including a traffic light 711 emitting a red light and a road sign 712 representing that turning right is prohibited when the traffic signal emits a red light . . . a circumscribed rectangle 713 representing an area in which the traffic light 711 is displayed, and the kind of the object to be detected for the area in which the traffic light 711 is displayed are tagged. Furthermore, since there is the road sign 712 in the teaching image 710, there is no direction in which the vehicle 10 can proceed. Therefore, a value indicating that the vehicle 10 cannot proceed is set for the representative points of all, the angular segments in the teaching image 710.”; 50 A teaching image 800 illustrated in FIG. 8A and a teaching image 810 illustrated in FIG. 8B are used when whether the vehicle 10 is prohibited from proceeding in a specific direction even when the traffic signal emits a green light is different depending on whether there is a vehicle coming from the opposite direction and a specific road sign 77 classifier pre-trained to output the proceedable certainty degree of each relative bearing with respect to a predetermined direction relative to the vehicle based on the display status of the traffic signal, such as the lighting condition of the traffic signal, a road sign installed around the traffic signal, and the environment around the traffic signal, and acquires, using, the classifier, the proceedable certainty degree of each relative bearing Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to KENNETH J MALKOWSKI whose telephone number is (313)446-4854. The examiner can normally be reached 8:00 AM - 5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Faris Almatrahi can be reached on 313-446-4821. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 
/KENNETH J MALKOWSKI/
Primary Examiner, Art Unit 3667

1 See parent published specification US 2021040484; no limiting definition is provided for “semantic relationship,” but it can broadly include various indications including types of traffic signs or lights, the side of a road a sign is on, relative positions, sign orientation (¶ 75), curb markings, types of adjacent objects, types of traffic signs (¶ 77), trajectories of other agents (¶ 79), spatial relationships, contextual information (¶ 73), etc.

Prosecution Timeline

Nov 25, 2024
Application Filed
Feb 13, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589745
VISUAL GUIDANCE METHOD FOR IMPROVING AUTONOMOUS NAVIGATION WITH ROW FOLLOWING CORRECTIONS IN STEREO CAMERA SYSTEMS
2y 5m to grant • Granted Mar 31, 2026
Patent 12583443
MOVING BODY CONTROL DEVICE, MOVING BODY CONTROL METHOD, AND MOVING BODY CONTROL PROGRAM
2y 5m to grant • Granted Mar 24, 2026
Patent 12571636
METHOD AND DEVICE WITH LANE DETECTION
2y 5m to grant • Granted Mar 10, 2026
Patent 12553733
COMPUTER-IMPLEMENTED METHOD FOR BEHAVIOR PLANNING OF AN AT LEAST PARTIALLY AUTOMATED EGO VEHICLE WITH A SPECIFIED NAVIGATION DESTINATION
2y 5m to grant • Granted Feb 17, 2026
Patent 12546621
TRAVELING TRACK GENERATION DEVICE AND TRAVELING TRACK GENERATION METHOD
2y 5m to grant • Granted Feb 10, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview: 94% (+19.1%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 642 resolved cases by this examiner. Grant probability derived from career allow rate.
