Prosecution Insights
Last updated: April 19, 2026
Application No. 18/327,147

RELATIVE AND GLOBAL POSITION-ORIENTATION MESSAGES

Non-Final OA: §101, §103
Filed: Jun 01, 2023
Examiner: LEE, JUSTIN S
Art Unit: 3668
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)

Grant Probability: 74% (Favorable)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 74% (above average; 342 granted / 462 resolved; +22.0% vs TC avg)
Interview Lift: +26.1% (resolved cases with interview vs without)
Avg Prosecution: 3y 3m (typical timeline)
Currently Pending: 20
Total Applications: 482 (career history, across all art units)

Statute-Specific Performance

§101: 9.3% (-30.7% vs TC avg)
§103: 54.3% (+14.3% vs TC avg)
§102: 20.5% (-19.5% vs TC avg)
§112: 8.6% (-31.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 462 resolved cases.
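The dashboard figures above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, assuming the "+22.0% vs TC avg" figure is an absolute percentage-point difference (the dashboard does not say so explicitly):

```python
# Career allowance rate from the dashboard's raw counts.
granted = 342
resolved = 462

allow_rate = granted / resolved  # fraction of resolved cases that granted
print(f"{allow_rate:.1%}")       # 74.0%

# The dashboard reports +22.0% vs the Tech Center average, which,
# read as a percentage-point delta, implies a TC-average rate of:
tc_avg = allow_rate - 0.220
print(f"{tc_avg:.1%}")           # 52.0%
```

Under that reading, this examiner allows at roughly 1.4x the Tech Center baseline, which is what drives the favorable grant-probability estimate.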

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

101 Analysis – Step 1

Claims 1, 13, and 20 are directed to exchanging position messages between a vehicle and a network to provide position-orientation information of the vehicle. Therefore, independent claims 1, 13, and 20 are within at least one of the four statutory categories.

101 Analysis – Step 2A, Prong I

Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.

Claims 1, 13, and 20 include limitations that recite an abstract idea (emphasized below) and will be used as representative claims for the remainder of the 101 rejection.

Claim 1 recites:

1. A method of operating a user equipment (UE) associated with a vehicle, comprising: determining first position-orientation information that is associated with a body frame of the vehicle at a first time and is relative to a global coordinate reference frame; generating a global position message comprising the first position-orientation information and a first timestamp that is based on the first time; and transmitting the global position message to a network component.

Claim 13 recites:

13.
A method of operating a network component, comprising: receiving a global position message from a user equipment (UE) associated with a vehicle, the global position message comprising first position-orientation information, the first position-orientation information associated with a body frame of the vehicle at a first time and is relative to a first global coordinate reference frame; and updating a map based on the global position message.

Claim 20 recites:

20. A method of operating a user equipment (UE) associated with a vehicle, comprising: determining first position-orientation information that is associated with a body frame of the vehicle at a first time and is relative to a first global coordinate reference frame; determining second position-orientation information that is associated with the body frame of the vehicle at a second time subsequent to the first time and is relative to a second global coordinate reference frame; determining differential information between the first position-orientation information and the second position-orientation information; generating a relative position message that comprises the differential information, information sufficient to determine the first time, and a second timestamp that is based on the second time; and transmitting the relative position message to a network component.

The examiner submits that the foregoing bolded limitation(s) constitute a "mental process" because, under its broadest reasonable interpretation, the claim covers performance of the limitation in the human mind. For example, with regard to claim 1, "determining…generating" in the context of this claim encompasses a person determining orientation information of the vehicle in front of him/her based on observation and writing the observed orientation information on paper using a pen. Also, with regard to claim 13, "updating a map…" in the context of this claim encompasses a person annotating a map using paper and pen.
Also, with regard to claim 20, "determining…generating" in the context of this claim encompasses a person determining orientation information of the moving vehicle in front of him/her at different times based on observation, further observing the change in orientation/position, and writing the observed information on paper using a pen.

Accordingly, the claims recite at least one abstract idea.

101 Analysis – Step 2A, Prong II

Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a "practical application."

In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the "additional limitations" while the bolded portions continue to represent the "abstract idea"):

Claim 1 recites:

1. A method of operating a user equipment (UE) associated with a vehicle, comprising: determining first position-orientation information that is associated with a body frame of the vehicle at a first time and is relative to a global coordinate reference frame; generating a global position message comprising the first position-orientation information and a first timestamp that is based on the first time; and transmitting the global position message to a network component.

Claim 13 recites:

13.
A method of operating a network component, comprising: receiving a global position message from a user equipment (UE) associated with a vehicle, the global position message comprising first position-orientation information, the first position-orientation information associated with a body frame of the vehicle at a first time and is relative to a first global coordinate reference frame; and updating a map based on the global position message.

Claim 20 recites:

20. A method of operating a user equipment (UE) associated with a vehicle, comprising: determining first position-orientation information that is associated with a body frame of the vehicle at a first time and is relative to a first global coordinate reference frame; determining second position-orientation information that is associated with the body frame of the vehicle at a second time subsequent to the first time and is relative to a second global coordinate reference frame; determining differential information between the first position-orientation information and the second position-orientation information; generating a relative position message that comprises the differential information, information sufficient to determine the first time, and a second timestamp that is based on the second time; and transmitting the relative position message to a network component.

For the following reasons, the examiner submits that the above-identified additional limitations do not integrate the above-noted abstract idea into a practical application. Regarding the additional limitations of "transmitting…receiving…," the examiner submits that these limitations are insignificant extra-solution activity that merely uses a user equipment associated with a vehicle and a network component to perform the process. In particular, the receiving and transmitting steps are recited at a high level of generality (i.e.
as a general means of transmitting/receiving data) and amount to mere data transmitting/receiving, which is a form of insignificant extra-solution activity. Also, the mere presence of the user equipment and network component in the vehicle does not integrate the judicial exception into a practical application because it amounts to no more than generally linking the use of the judicial exception to a particular technological environment or field of use (e.g., generally linking outputting the result of the judicial exception to a particular technological environment or field of use). Also, they are recited at a high level of generality that does not distinguish them from generic computer components simply performing generic computer functions. Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application.

Further, looking at the additional limitation(s) as an ordered combination or as a whole, the limitation(s) add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05).
Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

101 Analysis – Step 2B

Regarding Step 2B of the 2019 PEG, representative independent claims 1, 13, and 20 do not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claims do not integrate the abstract idea into a practical application. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of "receiving…transmitting…" amount to nothing more than applying the exception using a generic computer component. Generally applying an exception using a generic computer component cannot provide an inventive concept. And, as discussed above, the examiner submits that the additional limitations of "receiving…transmitting…" are insignificant extra-solution activities.

Further additional elements, such as the "user equipment" and "network component," do not amount to an inventive concept since, as stated above in the Step 2A, Prong II analysis, the claims simply use the additional elements as a tool to carry out the abstract idea (i.e., "apply it") on a generic computer or computing device and/or via software programming (see, e.g., MPEP 2106.05(f)). The additional elements are specified at a high level of generality to simply implement the abstract idea and are not themselves being technologically improved (see MPEP 2106.05 I.A.; also see specification, paragraph 66). Thus, these elements, taken individually or together, do not amount to significantly more than the abstract ideas themselves.
Further, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B to determine whether it is more than what is well-understood, routine, conventional activity in the field. The additional limitations of "receiving…transmitting…" are well-understood, routine, and conventional activities because the specification (par. 3) recites that wireless communications have developed through various generations. MPEP 2106.05(d)(II), and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere communication of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner.

Dependent claims 2-12, 14-19, and 21-25 are directed toward additional abstract ideas that are capable of being performed in the human mind or with pen and paper. Also, other limitations are directed toward additional aspects of the judicial exception (e.g., additional insignificant extra-solution activities) and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application. Therefore, the claims are not patent eligible under the same rationale as provided for the rejection of claims 1, 13, and 20. For example, the majority of the dependent claims fail to recite further additional steps/methods. Instead, they merely further clarify the terms introduced in the independent claims (e.g., what exactly the position-orientation information or differential information is).

Therefore, claims 1-25 are ineligible under 35 U.S.C. §101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C.
102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-8 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over KIM; Muryong et al.
(US 20200217972 A1) in view of XU; Jingwei (US 20230192089 A1).

Regarding claim 1, Kim teaches: A method of operating a user equipment (UE) associated with a vehicle, comprising: (See abstract.)

determining first position-orientation information that is associated with a body frame of the vehicle at a first time and is relative to a global coordinate reference frame; (See abstract, wherein the first 6-DOF pose may comprise a first altitude and one or more first rotational parameters indicative of a first orientation of the vehicle relative to a reference frame. Also see paragraphs 6, 65-68: determining, at a first time, a first 6 degrees of freedom (6-DOF) pose of the vehicle, wherein the first 6-DOF pose comprises a first altitude and one or more first rotational parameters indicative of a first orientation of the vehicle relative to a reference frame… The body frame orientation angle vector θ(t) at a time t may then be obtained based on the body frame orientation angle vector θ(t−1) at a time (t−1) using standard EKF updates. Also see fig. 3C and paragraphs 46-47: The first reference frame or body reference frame 322 is defined by x (e.g. forward), y (e.g. left), and z (e.g. up) orthogonal axes centered on the vehicle body. FIG. 3C also shows a second geographical reference frame given by ENU reference frame 332, which is defined by the East, North, and Up orthogonal axes. The orientation of ego vehicle 130 may be specified using first rotational parameters, which define orientation of the body reference frame 322 relative to the ENU reference frame 332.)

transmitting the […] message to a network component. (See fig. 2, paragraph 34: ego vehicle 130 may communicate with and/or receive information from various entities coupled to wireless network 220. For example, ego vehicle 130 may communicate with and/or receive information from AS 210 or cloud-based services over V2N.
As another example, ego vehicle 130 may communicate with RSU 222 over communication link 223. Also see paragraph 35: ego vehicle 130 may access AS 210 over V2I communication link 212. AS 210, for example, may be an entity supporting V2X applications that can exchange messages (e.g. over V2N links) with other entities supporting V2X applications.)

Kim discloses first position-orientation information; however, Kim does not specifically teach: generating a global position message comprising the first position-orientation information and a first timestamp that is based on the first time; transmitting the global position message.

Xu further discloses: generating a global position message comprising the first position-orientation information and a first timestamp that is based on the first time; transmitting the global position message. (See paragraph 58: probe data is obtained from vehicles traveling within a road network. The probe data includes a variety of data generated by a vehicle, including at least a location and a timestamp. The location may be identified, for example, by a GPS reading, near field communication locating methods, or any combination of locating means. The probe data of embodiments described herein further includes a vehicle speed and heading. Also see fig. 2 and paragraphs 45, 50: The OEM 114 may include a server and a database configured to receive probe data from vehicles.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Kim to further comprise the method taught by Xu because, by using the obtained vehicle data for third-party analysis, the safety of intersection crossings can be improved (paragraph 28).

Regarding claim 2, Kim-Xu teaches the method of claim 1, wherein the first global coordinate reference frame is an Earth-centered Earth-fixed (ECEF) frame or a fixed East-North-Up (ENU) frame.
(See Kim paragraph 20: Global coordinate systems include Earth-Centered, Earth-Fixed (ECEF), which is a terrestrial coordinate system that rotates with the Earth and has its origin at the center of the Earth. Geographical frames of reference also include local tangent plane based frames of reference based on the local vertical direction and the earth's axis of rotation. For example, the East, North, Up (ENU) frame of reference may include three coordinates: a position along the northern axis, a position along the eastern axis, and a vertical position (above or below some vertical datum or base measurement point).)

Regarding claim 3, Kim-Xu teaches the method of claim 1, wherein the body frame of the vehicle is a rear axle of the vehicle. (See fig. 3C and associated paragraphs; a vehicle inherently contains a rear axle.)

Regarding claim 4, Kim-Xu teaches the method of claim 1, wherein the first position-orientation information includes: a translation of the body frame of the vehicle relative to the global coordinate reference frame, or a rotation of the body frame of the vehicle relative to the global coordinate reference frame, or a combination thereof. (See Kim fig. 3C and paragraphs 46-47: The first rotational parameters that describe the rotation of body reference frame 322 relative to the ENU reference frame 332 may be represented, for example, by R.sub.nb 335, where R.sub.nb=[r.sub.1 r.sub.2 r.sub.3], where r.sub.1, r.sub.2, and r.sub.3 are each column vectors (e.g. 3 rows and 1 column) describing rotations of the x, y, and z axes relative to the E, N, and U axes, respectively.)

Regarding claim 5, Kim-Xu teaches the method of claim 4, wherein the first position-orientation information includes at least the translation of the body frame of the vehicle relative to the first global coordinate reference frame. (See fig. 3C, paragraphs 46-47, 19: 6 Degrees of Freedom (6DoF) pose refers to three translation components (e.g.
given by X, Y, and Z coordinates) and three angular components (e.g. roll, pitch and yaw)… The first reference frame or body reference frame 322 is defined by x (e.g. forward), y (e.g. left), and z (e.g. up) orthogonal axes centered on the vehicle body… The orientation of ego vehicle 130 may be specified using first rotational parameters, which define orientation of the body reference frame 322 relative to the ENU reference frame 332.)

Regarding claim 6, Kim-Xu teaches the method of claim 5, wherein the first position-orientation information further includes a covariance of the translation of the body frame of the vehicle relative to the first global coordinate reference frame. (See fig. 3C, paragraphs 46-47, 19: 6 Degrees of Freedom (6DoF) pose refers to three translation components (e.g. given by X, Y, and Z coordinates) and three angular components (e.g. roll, pitch and yaw)… The first reference frame or body reference frame 322 is defined by x (e.g. forward), y (e.g. left), and z (e.g. up) orthogonal axes centered on the vehicle body… The orientation of ego vehicle 130 may be specified using first rotational parameters, which define orientation of the body reference frame 322 relative to the ENU reference frame 332.)

Regarding claim 7, Kim-Xu teaches the method of claim 4, wherein the first position-orientation information includes at least the rotation of the body frame of the vehicle relative to the first global coordinate reference frame. (See Kim fig. 3C and paragraphs 46-47: The first rotational parameters that describe the rotation of body reference frame 322 relative to the ENU reference frame 332 may be represented, for example, by R.sub.nb 335, where R.sub.nb=[r.sub.1 r.sub.2 r.sub.3], where r.sub.1, r.sub.2, and r.sub.3 are each column vectors (e.g. 3 rows and 1 column) describing rotations of the x, y, and z axes relative to the E, N, and U axes, respectively.)
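Kim's R_nb = [r1 r2 r3] notation, quoted repeatedly above, expresses the body frame's x (forward), y (left), and z (up) axes as column vectors along the E, N, and U axes. For a level vehicle this reduces to a pure yaw rotation, and the "differential information" recited in claims 9-11 reduces to simple deltas between two timestamped poses. A minimal sketch of both conventions (the function names and the pure-yaw simplification are illustrative assumptions, not code from either reference):

```python
import math

def r_nb_yaw(yaw: float) -> list[list[float]]:
    """Body-to-ENU rotation matrix R_nb for a level vehicle.
    Columns are the body x (forward), y (left), z (up) axes
    expressed along the E, N, U axes; yaw is measured
    counterclockwise from East, in radians."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0.0],     # E components of body x, y, z
            [s,  c, 0.0],     # N components
            [0.0, 0.0, 1.0]]  # U components

def yaw_differential(yaw1: float, yaw2: float) -> float:
    """Differential information between two poses (claim 20 style):
    the signed yaw change from the first time to the second,
    wrapped into (-pi, pi]."""
    return math.atan2(math.sin(yaw2 - yaw1), math.cos(yaw2 - yaw1))

# Heading due east (yaw = 0): R_nb is the identity, so the body
# forward axis (first column) is exactly the East unit vector.
R = r_nb_yaw(0.0)
print([row[0] for row in R])  # [1.0, 0.0, 0.0]
```

The atan2 wraparound matters for the claimed differentials: naively subtracting yaw angles near the ±180° boundary would report a large spurious rotation.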
Regarding claim 8, Kim-Xu teaches the method of claim 7, wherein the first position-orientation information further includes: a covariance of an axis angle representation of the rotation of the body frame of the vehicle relative to the first global coordinate reference frame, or one or more Euler angles corresponding to the rotation of the body frame of the vehicle relative to the first global coordinate reference frame, or a variance of the one or more Euler angles, or any combination thereof. (See fig. 3C, paragraphs 46-47, 19: 6 Degrees of Freedom (6DoF) pose refers to three translation components (e.g. given by X, Y, and Z coordinates) and three angular components (e.g. roll, pitch and yaw)… The first reference frame or body reference frame 322 is defined by x (e.g. forward), y (e.g. left), and z (e.g. up) orthogonal axes centered on the vehicle body… The orientation of ego vehicle 130 may be specified using first rotational parameters, which define orientation of the body reference frame 322 relative to the ENU reference frame 332.)

Regarding claim 12, Kim-Xu teaches the method of claim 1, wherein the global position message further comprises a trace identifier, and wherein the trace identifier identifies the vehicle, a vehicle type associated with the vehicle, or a sensor type associated with the vehicle. (See Xu paragraph 45: The OEM 114 may include a server and a database configured to receive probe data from vehicles or devices corresponding to the OEM. For example, if the OEM is a brand of automobile, each of that manufacturer's automobiles (e.g., mobile device 104) may provide probe data to the OEM 114 for processing. That probe data may be encrypted with a proprietary encryption or encryption that is unique to the OEM.)

Claims 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over KIM; Muryong et al.
(US 20200217972 A1) in view of XU; Jingwei (US 20230192089 A1), and further in view of Pertsel; Shimon (US 11586843 B1).

Regarding claim 9, Kim-Xu teaches the method of claim 1, further comprising: determining second position-orientation information that is associated with the body frame of the vehicle at a second time and is relative to a second global coordinate reference frame; (See Kim paragraph 65; also see Kim paragraphs 45-47 and fig. 3C, global coordinate reference frame.)

Kim-Xu does not specifically teach: generating a relative position message that comprises differential information between the first position-orientation information and the second position-orientation information, a second timestamp that is based on the second time, and information sufficient to determine the first time; and transmitting the relative position message to the network component.

Pertsel further teaches: generating a relative position message that comprises differential information between the first position-orientation information and the second position-orientation information, a second timestamp that is based on the second time, and information sufficient to determine the first time; and transmitting the relative position message to the network component. (See fig. 11, step 772, col. 46, lines 55-62: in the step 772, the communication module 110 may transmit the annotated video frames to the remote device 710. Next, the method 750 may move to the step 774. The step 774 may end the method 750. The BRI of the term "message" encompasses video frames. Also see col. 43, line 35 through col. 44, line 4: the annotated video frame 702 may be generated from the video frame 700i. While one example video frame 700i is shown being used to generate one annotated video frame 702, any number of the video frames 700a-700l that show the obstacle 280a may be annotated. See figs.
5, 7-8, 10: each of the video frames represents the claimed differential information showing change in orientation of the vehicle at different times in the global coordinate reference frame (e.g. roll, pitch, yaw). The annotated video frame reads on the claimed "timestamp," as a captured frame represents a specific time as well when compared with other frames.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Kim-Xu to further comprise the method taught by Pertsel because improvement in detection of obstacles can be made, thus improving overall safety (Pertsel col. 33, lines 28-30; col. 3, lines 14-16).

Regarding claim 10, Kim-Xu-Pertsel teaches the method of claim 9, wherein the information sufficient to determine the first time comprises the first timestamp that is based on the first time or a delta between the first time and the second time. (See Xu paragraph 58: probe data is obtained from vehicles traveling within a road network. The probe data includes a variety of data generated by a vehicle, including at least a location and a timestamp. Also see Pertsel fig.
10, frames 700a, 700i, represent different timestamp) In regards to claim 11, Kim-Xu-Pertsel teaches the method of claim 9, wherein the differential information comprises: a translation differential between translations of the body frame of the vehicle relative to the first global coordinate reference frame and the second global coordinate reference frame, respectively, at the first time and the second time, respectively, or a covariance differential between covariances of translations of the body frame of the vehicle relative to the first global coordinate reference frame and the second global coordinate reference frame, respectively, at the first time and the second time, respectively, or a Euler angle differential between Euler angles corresponding to the rotation of the body frame of the vehicle relative to the first global coordinate reference frame and the second global coordinate reference frame, respectively, at the first time and the second time, respectively, or a Euler angle variance differential between Euler angle variances corresponding to the rotation of the body frame of the vehicle relative to the first global coordinate reference frame and the second global coordinate reference frame, respectively, at the first time and the second time, respectively, or a yaw angle differential between yaw angles corresponding to the rotation of the body frame of the vehicle relative to the first global coordinate reference frame and the second global coordinate reference frame, respectively, at the first time and the second time, respectively, or a yaw angle variance differential between yaw angle variances corresponding to the rotation of the body frame of the vehicle relative to the first global coordinate reference frame and the second global coordinate reference frame, respectively, at the first time and the second time, respectively, or any combination thereof. (See Pertsel fig. 11, step 764, col. 29, line 34, col. 
30, line 16: In one example, the change in orientation may be the pitch rotation 394 of the ego vehicle 50. In another example, the change in orientation may be the roll rotation 392. In yet another example, the change in orientation may be moving (e.g., acceleration) along the vertical axis 386. The pitch, roll, and yaw represent Euler angles. Also see Kim paragraphs 45-47 and fig. 3C, global coordinate reference frame.)

Claims 13-19 and 26-30 are rejected under 35 U.S.C. 103 as being unpatentable over Pertsel; Shimon (US 11586843 B1) in view of Kim et al. (US 20200217972 A1), and further in view of Yang; Mengda et al. (US 11430182 B1).

Regarding claim 13, Pertsel teaches: A method of operating a network component, comprising: receiving a global position message from a user equipment (UE) associated with a vehicle, (See fig. 11, step 772, col. 46, lines 55-62: in the step 772, the communication module 110 may transmit the annotated video frames to the remote device 710. Next, the method 750 may move to the step 774. The step 774 may end the method 750. The BRI of the term "message" encompasses video frames.)

the global position message comprising first position-orientation information, the first position-orientation information associated with a body frame of the vehicle at a first time and is relative to a first global coordinate reference frame; (See col. 43, line 35 through col. 44, line 4: the annotated video frame 702 may be generated from the video frame 700i. While one example video frame 700i is shown being used to generate one annotated video frame 702, any number of the video frames 700a-700l that show the obstacle 280a may be annotated. See figs. 5, 7-8, 10: each of the video frames represents the claimed differential information showing change in orientation of the vehicle at different times in the global coordinate reference frame (e.g. roll, pitch, yaw).
The annotated video frame reads on the claimed “timestamp,” as a captured frame represents a specific time when compared with other frames.)

Pertsel does not specifically teach the first position-orientation information…relative to a first global coordinate reference frame. Kim further discloses the first position-orientation information…relative to a first global coordinate reference frame. (See fig. 3C, paragraphs 45-47: the first rotational parameters (e.g., comprised in the first 6-DOF pose) may describe the orientation of a first reference frame (e.g., a body reference frame centered on the ego vehicle's body) relative to the (second) reference frame used to specify the 6-DOF pose.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Pertsel to further comprise the method taught by Kim, because Kim provides a well-known and predictable technique for expressing a vehicle's orientation relative to a global coordinate system, thereby improving the accuracy and consistency of orientation determination in Pertsel. A person of ordinary skill in the art would recognize that referencing the body-frame rotation to a global coordinate frame, rather than relying solely on relative or local changes, allows for more robust and standardized orientation tracking.

Pertsel-Kim does not specifically teach updating a map based on the global position message. Yang further discloses updating a map based on the global position message. (See col. 4, lines 5-20: The computing system 121 may be integrated into the vehicle 101 or may be remote from the vehicle 101, but still receiving sensor data from the vehicle 101 and/or other vehicles… col. 4, lines 61-65: FIG. 1B illustrates a hybrid data flow and block diagram that illustrates a process of acquiring sensor data, determining whether an existing map requires updating, and updating the existing map in response to such determination, in accordance with an example embodiment… col. 3, lines 30-40: Using the updated HD maps, a processor on the vehicles can detect or determine a presence of different objects or entities in the surrounding environment to assist the vehicles, or another vehicle, in performing navigation tasks such as vehicle acceleration and deceleration, vehicle braking, vehicle lane changing, adaptive cruise control, blind spot detection, rear-end radar for collision warning or collision avoidance, park assisting, cross-traffic monitoring, emergency braking, and automated distance control.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Pertsel-Kim to further comprise the method taught by Yang, because using the updated HD maps, a processor on the vehicles can detect or determine a presence of different objects or entities in the surrounding environment to assist the vehicles, or another vehicle, in performing navigation tasks (col. 3, lines 30-40). This operation can further improve safety.

In regards to claim 14, Pertsel-Kim-Yang teaches the method of claim 13, wherein the first global coordinate reference frame is an Earth-centered Earth-fixed (ECEF) frame or a fixed East-North-Up (ENU) frame. (See Kim paragraph 45, ENU or ECEF.)

In regards to claim 15, Pertsel-Kim-Yang teaches the method of claim 13, wherein the body frame of the vehicle is a rear axle of the vehicle. (See Pertsel fig. 5 and associated column and lines; a vehicle inherently contains a rear axle. Also see Kim fig. 3C.)

In regards to claim 16, Pertsel-Kim-Yang teaches the method of claim 13, wherein the first position-orientation information includes: a translation of the body frame of the vehicle relative to the global coordinate reference frame, or a rotation of the body frame of the vehicle relative to the global coordinate reference frame, or a combination thereof. (See Kim paragraphs 45-47 and fig. 3C.)

In regards to claim 17, Pertsel-Kim-Yang teaches the method of claim 13, further comprising: receiving a relative position message that comprises differential information between the first position-orientation information and second position-orientation information that is associated with the body frame of the vehicle at a second time and is relative to a second global coordinate reference frame, a second timestamp that is based on the second time, and information sufficient to determine the first time. (See Pertsel fig. 11, step 772, col. 46, lines 55-62: in the step 772, the communication module 110 may transmit the annotated video frames to the remote device 710. Next, the method 750 may move to the step 774. The step 774 may end the method 750. The BRI of the term “message” encompasses video frames. Also see col. 43, line 35 through col. 44, line 4: the annotated video frame 702 may be generated from the video frame 700i. While one example video frame 700i is shown being used to generate one annotated video frame 702, any number of the video frames 700a-700l that show the obstacle 280a may be annotated. See figs. 5, 7-8, 10; each of the video frames represents the claimed differential information, showing a change in orientation of the vehicle at different times in a global coordinate reference frame (e.g., roll, pitch, yaw). The annotated video frame reads on the claimed “timestamp,” as a captured frame represents a specific time when compared with other frames. Also see Kim paragraphs 45-47 and fig. 3C.)

In regards to claim 18, Pertsel-Kim-Yang teaches the method of claim 17, wherein the differential information comprises: a translation differential between translations of the body frame of the vehicle relative to the first global coordinate reference frame and the second global coordinate reference frame, respectively, at the first time and the second time, respectively, or a covariance differential between covariances of translations of the body frame of the vehicle relative to the first global coordinate reference frame and the second global coordinate reference frame, respectively, at the first time and the second time, respectively, or a Euler angle differential between Euler angles corresponding to the rotation of the body frame of the vehicle relative to the first global coordinate reference frame and the second global coordinate reference frame, respectively, at the first time and the second time, respectively, or a Euler angle variance differential between Euler angle variances corresponding to the rotation of the body frame of the vehicle relative to the first global coordinate reference frame and the second global coordinate reference frame, respectively, at the first time and the second time, respectively, or a yaw angle differential between yaw angles corresponding to the rotation of the body frame of the vehicle relative to the first global coordinate reference frame and the second global coordinate reference frame, respectively, at the first time and the second time, respectively, or a yaw angle variance differential between yaw angle variances corresponding to the rotation of the body frame of the vehicle relative to the first global coordinate reference frame and the second global coordinate reference frame, respectively, at the first time and the second time, respectively, or any combination thereof. (See Pertsel fig. 11, step 764, col. 29, line 34 through col. 30, line 16: In one example, the change in orientation may be the pitch rotation 394 of the ego vehicle 50. In another example, the change in orientation may be the roll rotation 392. In yet another example, the change in orientation may be moving (e.g., acceleration) along the vertical axis 386. The pitch, roll, and yaw represent Euler angles. Also see Kim paragraphs 45-47 and fig. 3C.)

In regards to claim 19, Pertsel-Kim-Yang teaches the method of claim 13, wherein the global position message further comprises a trace identifier, and wherein the trace identifier identifies the vehicle, a vehicle type associated with the vehicle, or a sensor type associated with the vehicle. (See Pertsel fig. 10, col. 44, lines 23-35, ‘truck’ identifier.)

In regards to claim 26, Pertsel teaches: A method of operating a network component, comprising: receiving a relative position message from a user equipment (UE) associated with a vehicle, the relative position message comprising: (See fig. 11, step 772, col. 46, lines 55-62: in the step 772, the communication module 110 may transmit the annotated video frames to the remote device 710. Next, the method 750 may move to the step 774. The step 774 may end the method 750. The BRI of the term “message” encompasses video frames.) differential information between first position-orientation information that is associated with a body frame of the vehicle at a first time and is relative to a first global coordinate reference frame and second position-orientation information that is associated with the body frame of the vehicle at a second time and is relative to a second global coordinate reference frame, information sufficient to determine the first time, and a second timestamp that is based on the second time; and (See col. 43, line 35 through col. 44, line 4: the annotated video frame 702 may be generated from the video frame 700i.
While one example video frame 700i is shown being used to generate one annotated video frame 702, any number of the video frames 700a-700l that show the obstacle 280a may be annotated. See figs. 5, 7-8, 10; each of the video frames represents the claimed differential information, showing a change in orientation of the vehicle at different times in a global coordinate reference frame (e.g., roll, pitch, yaw). The annotated video frame reads on the claimed “timestamp,” as a captured frame represents a specific time when compared with other frames.)

Pertsel does not specifically teach the first/second position-orientation information…relative to a first/second global coordinate reference frame. Kim further discloses the first/second position-orientation information…relative to a first/second global coordinate reference frame. (See fig. 3C, paragraphs 45-47: the first rotational parameters (e.g., comprised in the first 6-DOF pose) may describe the orientation of a first reference frame (e.g., a body reference frame centered on the ego vehicle's body) relative to the (second) reference frame used to specify the 6-DOF pose.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Pertsel to further comprise the method taught by Kim, because Kim provides a well-known and predictable technique for expressing a vehicle's orientation relative to a global coordinate system, thereby improving the accuracy and consistency of orientation determination in Pertsel. A person of ordinary skill in the art would recognize that referencing the body-frame rotation to a global coordinate frame, rather than relying solely on relative or local changes, allows for more robust and standardized orientation tracking.

Pertsel-Kim does not specifically teach updating a map based on the relative position message. Yang further discloses updating a map based on the relative position message. (See col. 4, lines 5-20: The computing system 121 may be integrated into the vehicle 101 or may be remote from the vehicle 101, but still receiving sensor data from the vehicle 101 and/or other vehicles… col. 4, lines 61-65: FIG. 1B illustrates a hybrid data flow and block diagram that illustrates a process of acquiring sensor data, determining whether an existing map requires updating, and updating the existing map in response to such determination, in accordance with an example embodiment… col. 3, lines 30-40: Using the updated HD maps, a processor on the vehicles can detect or determine a presence of different objects or entities in the surrounding environment to assist the vehicles, or another vehicle, in performing navigation tasks such as vehicle acceleration and deceleration, vehicle braking, vehicle lane changing, adaptive cruise control, blind spot detection, rear-end radar for collision warning or collision avoidance, park assisting, cross-traffic monitoring, emergency braking, and automated distance control.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Pertsel-Kim to further comprise the method taught by Yang, because using the updated HD maps, a processor on the vehicles can detect or determine a presence of different objects or entities in the surrounding environment to assist the vehicles, or another vehicle, in performing navigation tasks (col. 3, lines 30-40). This operation can further improve safety.
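The relative position message recited in claims 17 and 26 pairs differential pose information with two time references (a second timestamp plus information sufficient to recover the first time). Purely for orientation, the recited contents might be modeled as below; this is a sketch, not the applicant's or the references' implementation, and every field and function name is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class PositionOrientation:
    # 6-DOF pose of the vehicle body frame relative to a global frame (e.g. ENU)
    translation_m: tuple      # (x, y, z) in metres
    euler_rpy_rad: tuple      # (roll, pitch, yaw) in radians

@dataclass
class RelativePositionMessage:
    # differential information between two timestamped poses
    delta_translation_m: tuple
    delta_euler_rpy_rad: tuple
    first_time_offset_s: float   # "information sufficient to determine the first time"
    second_timestamp_s: float    # timestamp based on the second time

def make_relative_message(p1, t1, p2, t2):
    """Build the differential message from the poses at the first and second times."""
    d_xyz = tuple(b - a for a, b in zip(p1.translation_m, p2.translation_m))
    d_rpy = tuple(b - a for a, b in zip(p1.euler_rpy_rad, p2.euler_rpy_rad))
    # carrying the second timestamp plus an offset lets the receiver recover t1
    return RelativePositionMessage(d_xyz, d_rpy, t2 - t1, t2)
```

Under this reading, a receiver holding the first pose and the message can reconstruct the second pose and its time, which is the sense in which the differential plus the two time references is informationally complete.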
In regards to claim 27, Pertsel-Kim-Yang teaches the method of claim 26, wherein the differential information comprises: a translation differential between translations of the body frame of the vehicle relative to the first global coordinate reference frame and the second global coordinate reference frame, respectively, at the first time and the second time, respectively, or a covariance differential between covariances of translations of the body frame of the vehicle relative to the first global coordinate reference frame and the second global coordinate reference frame, respectively, at the first time and the second time, respectively, or a Euler angle differential between Euler angles corresponding to the rotation of the body frame of the vehicle relative to the first global coordinate reference frame and the second global coordinate reference frame, respectively, at the first time and the second time, respectively, or a Euler angle variance differential between Euler angle variances corresponding to the rotation of the body frame of the vehicle relative to the first global coordinate reference frame and the second global coordinate reference frame, respectively, at the first time and the second time, respectively, or a yaw angle differential between yaw angles corresponding to the rotation of the body frame of the vehicle relative to the first global coordinate reference frame and the second global coordinate reference frame, respectively, at the first time and the second time, respectively, or a yaw angle variance differential between yaw angle variances corresponding to the rotation of the body frame of the vehicle relative to the first global coordinate reference frame and the second global coordinate reference frame, respectively, at the first time and the second time, respectively, or any combination thereof. (See Pertsel fig. 11, step 764, col. 29, line 34 through col. 30, line 16: In one example, the change in orientation may be the pitch rotation 394 of the ego vehicle 50. In another example, the change in orientation may be the roll rotation 392. In yet another example, the change in orientation may be moving (e.g., acceleration) along the vertical axis 386. The pitch, roll, and yaw represent Euler angles. Also see Kim paragraphs 45-47 and fig. 3C.)

In regards to claim 28, Pertsel-Kim-Yang teaches the method of claim 26, wherein the first global coordinate reference frame and the second global coordinate reference frame comprise Earth-centered Earth-fixed (ECEF) frames or fixed East-North-Up (ENU) frames. (See Kim paragraph 45, ENU or ECEF.)

In regards to claim 29, Pertsel-Kim-Yang teaches the method of claim 26, wherein the body frame of the vehicle is a rear axle of the vehicle. (See Pertsel fig. 5 and associated column and lines; a vehicle inherently contains a rear axle. Also see Kim fig. 3C.)

In regards to claim 30, Pertsel-Kim-Yang teaches the method of claim 26, wherein the relative position message further comprises a trace identifier, and wherein the trace identifier identifies the vehicle, a vehicle type associated with the vehicle, or a sensor type associated with the vehicle. (See Pertsel fig. 10, col. 44, lines 23-35, ‘truck’ identifier.)

Claims 20-25 are rejected under 35 U.S.C. 103 as being unpatentable over Pertsel; Shimon (US 11586843 B1) in view of Kim et al. (US 20200217972 A1).

In regards to claim 20, Pertsel teaches: A method of operating a user equipment (UE) associated with a vehicle, comprising: (See abstract.) determining first position-orientation information that is associated with a body frame of the vehicle at a first time and is relative to a first global coordinate reference frame; determining second position-orientation information that is associated with the body frame of the vehicle at a second time subsequent to the first time and is relative to a second global coordinate reference frame; determining differential information between the first position-orientation information and the second position-orientation information; (See fig. 7, steps 762-764, fig. 5, col. 29, line 34 through col. 30, line 16: The apparatus 100 may be configured to detect a change in orientation of the ego vehicle 50. The change in orientation of the ego vehicle 50 detected may correspond to driving over the obstacles 280a-280b (or other obstacles). In one example, the change in orientation may be the pitch rotation 394 of the ego vehicle 50. In another example, the change in orientation may be the roll rotation 392. In yet another example, the change in orientation may be moving (e.g., acceleration) along the vertical axis 386.) generating a relative position message that comprises the differential information, information sufficient to determine the first time, and a second timestamp that is based on the second time; and transmitting the relative position message to a network component. (See fig. 11, step 772, col. 46, lines 55-62: in the step 772, the communication module 110 may transmit the annotated video frames to the remote device 710. Next, the method 750 may move to the step 774. The step 774 may end the method 750. The BRI of the term “message” encompasses video frames. Also see col. 43, line 35 through col. 44, line 4: the annotated video frame 702 may be generated from…
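Claim 20's UE-side flow reduces two globally referenced poses to a differential. One subtlety such a differential raises, offered here only as an illustrative aside (it is not discussed in the excerpt above), is that a yaw angle differential should be wrapped so that a heading change across north is reported as the small signed angle rather than the large complementary one:

```python
import math

# Illustrative sketch of a yaw differential between two timestamped headings
# relative to a global frame. The wrapping convention is an assumption made
# for this example, not something taken from the claims or cited references.

def yaw_differential(yaw1_rad, yaw2_rad):
    """Smallest signed rotation taking yaw1 to yaw2, wrapped to (-pi, pi]."""
    d = (yaw2_rad - yaw1_rad) % (2 * math.pi)
    return d - 2 * math.pi if d > math.pi else d

# a heading crossing north: 350 deg -> 10 deg is a +20 deg change, not -340 deg
d = yaw_differential(math.radians(350), math.radians(10))
print(round(math.degrees(d), 6))  # 20.0
```

The same wrapping concern applies to roll and pitch differentials in an Euler angle representation; a naive subtraction only works while both angles stay within one unwrapped revolution.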

Prosecution Timeline

Jun 01, 2023
Application Filed
Oct 08, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597247
UNDERWATER DEVICE FOR ACQUIRING IMAGES OF A WATER BOTTOM
2y 5m to grant Granted Apr 07, 2026
Patent 12597300
INTEGRATED VEHICLE HEALTH MANAGEMENT SYSTEMS AND METHODS USING AN ENHANCED FAULT MODEL FOR A DIAGNOSTIC REASONER
2y 5m to grant Granted Apr 07, 2026
Patent 12596373
SYSTEM AND METHOD FOR EVALUATING THE PERFORMANCE OF A VEHICLE OPERATED BY A DRIVING AUTOMATION SYSTEM
2y 5m to grant Granted Apr 07, 2026
Patent 12583540
A METHOD FOR CONTROLLING ASSEMBLY OF A VEHICLE FROM A SET OF MODULES, A CONTROL DEVICE, A SYSTEM, A VEHICLE, A COMPUTER PROGRAM AND A COMPUTER-READABLE MEDIUM
2y 5m to grant Granted Mar 24, 2026
Patent 12548456
Methods and Apparatus for Enhancing Unmanned Aerial Vehicle Management Using a Wireless Network
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
74%
Grant Probability
99%
With Interview (+26.1%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 462 resolved cases by this examiner. Grant probability derived from career allow rate.
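The headline figures above reconcile with simple arithmetic. As a back-of-envelope sketch only: the tool's exact methodology is not published, so the additive interview lift and the 99% cap below are assumptions.

```python
# Career allow rate: 342 granted out of 462 resolved cases (shown above).
granted, resolved = 342, 462
allow_rate = granted / resolved
print(f"career allow rate: {allow_rate:.0%}")  # 74%

# Assumed: the +26.1 point interview lift adds to the base rate, capped at 99%.
interview_lift = 0.261
with_interview = min(allow_rate + interview_lift, 0.99)
print(f"with interview:    {with_interview:.0%}")  # 99%
```

This matches the 74% grant probability and 99% with-interview figures displayed, which suggests the projection is the career base rate plus the observed interview lift rather than an application-specific model.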
