Prosecution Insights
Last updated: April 19, 2026
Application No. 18/468,628

FIELD RELATED DRIVING AND EXTERNAL CONTENT

Non-Final OA: §101, §103, §112
Filed: Sep 15, 2023
Examiner: ARTIMEZ, DANA FERREN
Art Unit: 3667
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Autobrains Technologies Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 58% (Moderate)
OA Rounds: 1-2
To Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 58% of resolved cases (46 granted / 80 resolved; +5.5% vs TC avg)
Interview Lift: +43.9% (strong; allowance lift on resolved cases with interview)
Avg Prosecution: 3y 2m typical timeline (42 currently pending)
Total Applications: 122 career history, across all art units
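The headline examiner figures above can be reproduced from the stated case counts. A minimal sketch (only the 46 granted / 80 resolved totals come from this page; the rounding to a whole percent and the helper names are assumptions, and the case-level data behind the interview-lift split is not shown here):

```python
# Sketch of how the dashboard's career statistics could be derived from the
# stated case counts. Only 46/80 comes from the page; rounding is assumed.

def allow_rate_pct(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def delta_vs_avg(rate: float, tc_avg: float) -> float:
    """Percentage-point difference versus the Tech Center average."""
    return rate - tc_avg

rate = allow_rate_pct(46, 80)   # 57.5, displayed as 58%
print(round(rate))              # 58
```

The "+5.5% vs TC avg" figure is then simply `delta_vs_avg(rate, tc_avg)` against whatever Tech Center baseline the analytics provider estimates.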

Statute-Specific Performance

§101: 19.0% (-21.0% vs TC avg)
§103: 46.2% (+6.2% vs TC avg)
§102: 7.3% (-32.7% vs TC avg)
§112: 24.6% (-15.4% vs TC avg)

Deltas are vs. a Tech Center average estimate • Based on career data from 80 resolved cases

Office Action

§101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This is a Non-Final rejection on the merits of this application. Claims 1-20 are currently pending, as discussed below.

The Examiner notes that the rejections are based on the broadest reasonable interpretation of the claim language. Applicant is kindly invited to consider each reference as a whole. References are to be interpreted as by one of ordinary skill in the art rather than as by a novice. See MPEP 2141. Therefore, the relevant inquiry when interpreting a reference is not what the reference expressly discloses on its face but what the reference would teach or suggest to one of ordinary skill in the art.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1, 10 and 12 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement.
The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Regarding claim 1 (similarly claim 12), Applicant has apparently not described in the specification, in sufficient detail, by what algorithm(s) or steps/procedure he/she determined "estimating, by using a neural network, one or more virtual fields…based on one or more virtual fields, a virtual force for use in applying a driving related operation … obtaining, estimating and determining is impacted by the content". The specification does not adequately describe how the "virtual force" and/or "virtual field" are calculated, or how they interact with a vehicle control system for applying a driving related operation based on the content received. While the specification (published [0219-0278]) discloses that certain steps may be selectively executed based on the content received via a V2X signal (including varying neural network connectivity, feature vector resolutions, and model size), there is insufficient description regarding the claimed estimating and/or determining of "virtual fields" and "virtual forces". Further, while the specification implies that the content enables longer response times or reduces peak resource usage, there is no explicit disclosure connecting these modifications to a quantified force model associated with a physical model as recited in the claim; the specification does not provide sufficient details or examples showing how the content modifies or interacts with the neural network or the vehicle operation/behavior. Lastly, the specification also fails to describe the algorithm(s), structure, or training of the neural network for estimating/determining the virtual fields and/or forces.
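For context on the kind of algorithmic detail the rejection says is missing: one conventional way a "virtual field" and "virtual force" pair could be formalized (purely an editorial illustration; this is not taken from the application and is not asserted to cure the rejection) is to treat each object as emitting a repulsive potential and take the virtual force on the ego vehicle as the negative gradient of the summed field:

```python
# Illustrative sketch only: Gaussian repulsive potential per object, with the
# "virtual force" as the negative numerical gradient of the summed field.
# All parameter values (strength, sigma, eps) are hypothetical.
import math

def potential(ego, obj, strength=1.0, sigma=5.0):
    """Scalar potential at the ego position due to one object (Gaussian)."""
    dx, dy = ego[0] - obj[0], ego[1] - obj[1]
    return strength * math.exp(-(dx * dx + dy * dy) / (2 * sigma ** 2))

def virtual_force(ego, objects, eps=1e-3):
    """Negative central-difference gradient of the summed field: a 2-D force."""
    def field(p):
        return sum(potential(p, o) for o in objects)
    fx = -(field((ego[0] + eps, ego[1])) - field((ego[0] - eps, ego[1]))) / (2 * eps)
    fy = -(field((ego[0], ego[1] + eps)) - field((ego[0], ego[1] - eps))) / (2 * eps)
    return fx, fy

# An object directly ahead (at +x) repels the ego vehicle: fx comes out negative.
fx, fy = virtual_force((0.0, 0.0), [(3.0, 0.0)])
```

A disclosure along these lines (field shape, parameters, and how the force feeds the control system) is the sort of thing the §112(a) analysis above finds absent.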
See MPEP 2161.01, I. and LizardTech Inc. v. Earth Resource Mapping Inc., 424 F.3d 1336, 1345 (Fed. Cir. 2005) cited therein ("Whether the flaw in the specification is regarded as a failure to demonstrate that the applicant possessed the full scope of the invention recited in [the claim] or a failure to enable the full breadth of that claim, the specification provides inadequate support for the claim under [§ 112(a)]"). Accordingly, the Examiner believes that Applicant has not demonstrated to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Regarding claim 10, Applicant has apparently not described in the specification, in sufficient detail, by what algorithm(s) or steps/procedure he/she determined "spatial relationships between the information source, the specified object and the vehicle, such that the virtual force is determined contingent on the determined spatial relationship between the information source, the specified object and the vehicle". The specification fails to teach or show any algorithm(s), structures, or working examples of how the system derives any or all spatial relationships between an external information source, an object, and the vehicle in a way that can subsequently be used in calculating/determining a virtual force based on those spatial relationships; the specification (published [0257-0259]) merely repeats the claim language without providing any details/examples on how Applicant determines those spatial relationships. See MPEP 2161.01, I. and LizardTech, 424 F.3d at 1345 ("Whether the flaw in the specification is regarded as a failure to demonstrate that the applicant possessed the full scope of the invention recited in [the claim] or a failure to enable the full breadth of that claim, the specification provides inadequate support for the claim under [§ 112(a)]"). Accordingly, the Examiner believes that Applicant has not demonstrated to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

The dependent claims are also rejected under 112, first paragraph, by virtue of their dependence upon the rejected independent claims.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 7, 10-12 and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 (similarly claim 12) recites the limitation "estimating, by using a neural network, one or more virtual fields…based on one or more virtual fields, a virtual force for use in applying a driving related operation … obtaining, estimating and determining is impacted by the content". This limitation is indefinite and not reasonably certain, with indeterminate metes and bounds, because it is unclear from the teachings of the specification which "physical model" may be covered by the claim (in particular, what model is defined/calculated, how, and how it relates to the virtual fields and virtual forces), what the "physical model" represents/covers, and, further, how the virtual fields relate to a virtual force and are associated with a physical model. Hence, this claim limitation renders the claim indefinite.

Claim 7 (similarly claim 18) recites the limitation "not-sensed object that is not sensed by the vehicle", which is indefinite because it is unclear and confusing to the Examiner whether this refers to an object that cannot be sensed at all or merely to an object that is out of the vehicle's sensing range. Hence, this claim limitation renders the claim indefinite.

Claim 10 recites the limitation "determining spatial relationships between the information source, the specified object and the vehicle, such that the virtual force is determined contingent on the determined spatial relationship between the information source, the specified object and the vehicle", which is indefinite and not reasonably certain, with indeterminate metes and bounds, because it is unclear from the teachings of the specification whether "spatial relationships" refers to absolute position, relative bearing, angle of approach, time-to-collision, or other geometric measures.
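The indefiniteness point about "spatial relationships" can be made concrete: the candidate geometric measures the rejection lists are computed quite differently from one another, so the claim's scope changes materially depending on which is meant. A sketch of two of the listed candidates (illustrative only; the application does not specify either, which is the rejection's point):

```python
# Two of the candidate "spatial relationship" measures named in the rejection.
# Function names and signatures are hypothetical, for illustration only.
import math

def relative_bearing(ego, other):
    """Bearing from ego to other, in radians CCW from the +x axis."""
    return math.atan2(other[1] - ego[1], other[0] - ego[0])

def time_to_collision(gap, closing_speed):
    """Range over range-rate; infinite when the gap is not closing."""
    return gap / closing_speed if closing_speed > 0 else math.inf

b = relative_bearing((0.0, 0.0), (1.0, 1.0))   # pi/4: object ahead-left
ttc = time_to_collision(10.0, 5.0)             # 2.0 seconds
```

A bearing is a direction, while a time-to-collision is a scalar with units of time; a virtual force "contingent on" one is not interchangeable with one contingent on the other.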
Further, the claim recites that "the virtual force is determined contingent on the determined spatial relationships between the information source, the specified object and the vehicle", but the specification does not provide sufficient details and/or working examples on how and when such a contingent determination process would or would not occur. Hence, this claim limitation renders the claim indefinite.

Claim 11 recites the limitation "at least one step of the obtaining, the estimating and the determining is selectively executed based on the content", which is indefinite because it is unclear and confusing to the Examiner what this content is (e.g. what kind of content? Position, type of object, priority, velocity, static, dynamic, or something else?); further, the claim fails to specify under what conditions any and/or all steps of the obtaining, estimating and determining are executed or not executed, and, lastly, what properties of "the content" trigger any or all selective executions of the steps. Hence, this claim limitation renders the claim indefinite.

The dependent claims are also rejected under 112, second paragraph, by virtue of their dependence upon the rejected independent claims.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

101 Analysis – Step 1 – YES

Claim 1 is directed to a method, and claim 12 is directed to a non-transitory storage medium. Therefore, claims 1 and 12 are within at least one of the four statutory categories.
101 Analysis – Step 2A, Prong I

Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.

Independent claim 1 includes limitations that recite an abstract idea (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection. The other analogous claim 12 is rejected for the same reasons as representative claim 1 as discussed here. Claim 1 recites:

A method that is computer implemented and for field related driving, the method comprises: receiving content from an information source located outside of a vehicle; obtaining object information regarding one or more objects located within an environment of the vehicle; estimating, by using a neural network (NN), and based on the object information, one or more virtual fields of the one or more objects; and determining, based on the one or more virtual fields, a virtual force for use in applying a driving related operation of the vehicle, wherein the virtual force is associated with a physical model and representing an impact of the one or more objects on a behavior of the vehicle; wherein at least one step of the obtaining, the estimating and the determining is impacted by the content.

The Examiner submits that the foregoing bolded limitation(s) constitute a "mental process" because, under its broadest reasonable interpretation, the claim covers performance of the limitation in the human mind. For example, the "estimating…one or more virtual fields of the one or more objects" and "determining…a behavior of the vehicle" limitations, in the context of this claim, encompass a rider/user of a vehicle observing the vehicle's surroundings (e.g. analyzing positions of other road users/objects) and forming a simple judgment either mentally or using pen and paper (e.g. by estimating/determining their influence on his/her vehicle); accordingly, the claim recites an abstract idea, as recited in published specification [0047], reproduced below:

We suspect that humans employ an internal representation of surrounding objects in the form of virtual force fields that immediately imply action, thus circumventing the need for kinematics estimation. Consider a scenario in which the ego vehicle drives in one lane and a vehicle diagonally in front in an adjacent lane starts swerving into the ego lane. The human response to brake or veer off would be immediate and instinctive and can be experienced as a virtual force repelling the ego from the swerving vehicle. This virtual force representation is learned and associated with the specific road object.

The Examiner would also note MPEP 2106.04(a)(2)(III): The courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas—the 'basic tools of scientific and technological work' that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs. Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 ("'[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work'" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same).
Accordingly, the "mental processes" abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions. Here, the determination is a form of making an evaluation and judgment based on observation (driver behavior). Accordingly, the claim recites at least one abstract idea.

101 Analysis – Step 2A, Prong II

Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are analyzed to determine whether each claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a "practical application."

In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the "additional limitations" while the bolded portions continue to represent the "abstract idea"):

A method that is computer implemented and for field related driving, the method comprises: receiving content from an information source located outside of a vehicle; obtaining object information regarding one or more objects located within an environment of the vehicle; estimating, by using a neural network (NN), and based on the object information, one or more virtual fields of the one or more objects; and determining, based on the one or more virtual fields, a virtual force for use in applying a driving related operation of the vehicle, wherein the virtual force is associated with a physical model and representing an impact of the one or more objects on a behavior of the vehicle; wherein at least one step of the obtaining, the estimating and the determining is impacted by the content.

For the following reason(s), the Examiner submits that the above-identified limitations do not integrate the above-noted abstract idea into a practical application. Regarding the additional limitations of "receiving content…" and "obtaining object information…", the Examiner submits that these limitations are insignificant extra-solution activities. In particular, the receiving and obtaining steps are recited at a high level of generality (i.e. as a general means of acquiring data) and amount to mere data gathering, which is a form of insignificant extra-solution activity. Lastly, the recitation of "neural network" merely describes how to generally "apply" the otherwise abstract ideas and/or additional limitations in a generic or general-purpose computer environment, where the processor is recited as a generic processor performing a generic computer function of acquiring data. This generic processor limitation is no more than mere instructions to apply the exception using a generic computer component and merely automates the steps. Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitation(s) as an ordered combination or as a whole, the limitation(s) add nothing that is not already present when looking at the elements taken individually.
For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field; apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition; implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim; effect a transformation or reduction of a particular article to a different state or thing; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitation(s) do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

101 Analysis – Step 2B

Regarding Step 2B of the 2019 PEG, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above with respect to integration of the abstract idea into a practical application, regarding the additional limitation of "neural network", the Examiner submits that the processor is recited at a high level of generality (i.e. as a generic computer component performing generic calculation) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Generally applying an exception using a generic computer component cannot provide an inventive concept.
And as discussed above, the additional limitations are insignificant extra-solution activities. As explained, the additional elements are recited at a high level of generality to simply implement the abstract idea and are not themselves being technologically improved. See, e.g., MPEP § 2106.05; Alice Corp. v. CLS Bank, 573 U.S. 208, 223 (2014) ("[T]he mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention"); Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354-55, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016) (selecting information for collection, analysis and display constitutes insignificant extra-solution activity); Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1243-44, 120 USPQ2d 1844, 1855-57 (Fed. Cir. 2016) (generating a second menu from a first menu and sending the second menu to another location as performed by generic computer components). Hence, the claims are not patent eligible.

Dependent Claims

Dependent claims 2-11 and 13-20 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or additional elements that do not integrate the judicial exception into a practical application. Therefore, dependent claims 2-11 and 13-20 are not patent eligible under the same rationale as provided for in the rejection of claim 1. As such, claims 1-20 are rejected under 35 USC § 101 as being drawn to an abstract idea without significantly more, and thus are ineligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kawahara et al. (US 2016/0272199 A1, hereinafter Kawahara) in view of Kozuka et al. (US 2017/0262750 A1, hereinafter Kozuka).

Regarding claim 1 (similarly claim 12), Kawahara teaches a method that is computer implemented and for field related driving (see at least Abstract), the method comprises:

receiving content from an information source located outside of a vehicle; (see at least Fig. 1-11 [0011, 0029-0102]: the communication unit performs road-to-vehicle communication between the host vehicle and a radio communication station on a road side, or vehicle-to-vehicle communication between the host vehicle and a communicator mounted in a peripheral vehicle, or communicates with a server via a base station of a wide area network. The communication unit may receive weather information and traffic congestion information.)

obtaining object information regarding one or more objects located within an environment of the vehicle; (see at least Fig. 1-11 [0029-0102]: the detection unit is configured to include sensors which detect the positions, the relative speeds, and the sizes of peripheral vehicles which travel in the lane in which the host vehicle travels, and in adjacent lanes.)

estimating, based on the object information, one or more virtual fields of the one or more objects; (see at least Fig. 1-11 [0029-0102]: the control unit assigns a potential field to each peripheral vehicle in the bird's eye view image and prepares a distribution illustrating a state in which the potential fields of the peripheral vehicles are integrally distributed across the entire region of the bird's eye view image. The potential field conceptually represents the degree of psychological discomfort that is given to the driver of the host vehicle due to the presence of a peripheral vehicle, with the degree of psychological discomfort being associated with the position of the peripheral vehicle on the road.)

and determining, based on the one or more virtual fields, a virtual force for use in applying a driving related operation of the vehicle, wherein the virtual force is associated with a physical model and representing an impact of the one or more objects on a behavior of the vehicle; (see at least Fig. 1-11 [0029-0102]: the control unit determines whether the position of the host vehicle has a high potential in the potential distribution.
When the host vehicle is positioned in a low potential field, the control unit may command the engine control ECU of the vehicle to maintain a current speed. When it is determined that the host vehicle is positioned in a high potential field, the control unit determines whether the host vehicle is capable of escaping from the high potential field to a low potential field by changing the position of the host vehicle relative to the peripheral vehicles in the forward and rearward direction while maintaining the current travel lane, or by changing lanes from the travel lane to an adjacent lane. The control unit commands the engine control or brake control ECU and/or the steering angle control ECU to adjust the speed of the host vehicle and/or change lanes such that the host vehicle enters the low potential field.)

wherein at least one step of the obtaining, the estimating and the determining is impacted by the content. (see at least Fig. 1-11 [0029-0102]: the potential field value may be set to be increased to the extent that the speed of the host vehicle or the speed of the peripheral vehicle is high. The value or range of a potential field may be adjusted according to the size of the peripheral vehicle, and may be adjusted according to road conditions such as the width, the curve, or the slope of a road. Under bad weather conditions such as rainy, stormy, or foggy weather, a potential field may be assigned to a wide range, or the value of the potential field may be set to be greater than under normal conditions, based on weather information acquired by the communication unit. A speed, the size of a peripheral vehicle, road conditions, and weather conditions are capable of being reflected in the calculation of a potential field.)
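The Kawahara behavior summarized in the mapping above can be caricatured in a few lines (a sketch of the cited decision structure only; the threshold value, the speed/weather scalings, and all names here are invented for illustration, not taken from the reference):

```python
# Toy sketch of the potential-threshold control logic attributed to Kawahara
# above. Only the decision structure (low field: keep speed; high field:
# escape by speed/lane change) follows the summary; all numbers are assumed.

def field_params(base_value=1.0, base_range=5.0, peripheral_speed=0.0,
                 bad_weather=False):
    """Field value grows with peripheral speed; range widens in bad weather."""
    value = base_value * (1.0 + 0.1 * peripheral_speed)
    rng = base_range * (1.5 if bad_weather else 1.0)
    return value, rng

def control_command(host_potential, threshold=0.5):
    """Low potential: maintain speed. High potential: adjust speed or lane."""
    if host_potential < threshold:
        return "maintain_speed"
    return "adjust_speed_or_change_lane"

print(control_command(0.2))   # maintain_speed
print(control_command(0.9))   # adjust_speed_or_change_lane
```

This is the sense in which the received content (weather, congestion) "impacts" the determining step in the mapping: it reshapes the field before the threshold decision is made.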
It may be alleged that Kawahara does not explicitly teach estimating, by using a neural network (NN), and based on the object information, one or more virtual fields of the one or more objects.

Kozuka is directed to a risk prediction method capable of predicting risk areas having a possibility of causing a dangerous situation for a running vehicle. Kozuka teaches estimating, by using a neural network (NN), and based on the object information, one or more virtual fields of the one or more objects; (see at least Fig. 3-4 [0060-0070]: the risk predictor acquires an input image taken by an in-vehicle camera, processes the acquired input image (e.g. including a bus at rest and persons) by using the convolutional neural network to generate an image a on which a degree of risk (likelihood) of each risk area estimated in the image is superimposed, and outputs the resultant image a as a predicted risk.)

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Kawahara's vehicle travel control system and device to incorporate the technique of estimating, by using a neural network (NN), and based on the object information, one or more virtual fields of the one or more objects, as taught by Kozuka, with a reasonable expectation of success, because CNNs are excellent at extracting spatial features from visual data, and doing so would help the vehicle better detect and localize road users and understand their positions relative to the host vehicle, such that the CNN output can be utilized for path planning and driver assistance systems for warning the driver of risk zones and/or taking action to avoid a potential collision, thus improving roadway safety.
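As a shape-level illustration of the Kozuka pipeline described above (a hand-rolled convolution standing in for the CNN; Kozuka's actual architecture, weights, and data are of course not reproduced here), a per-cell risk map can be produced from an occupancy grid like so:

```python
# Toy stand-in for the CNN risk estimation summarized above from Kozuka: a
# single 3x3 averaging convolution turns an occupancy grid into a per-cell
# risk "likelihood" map that could be superimposed on the input image.
# The kernel and grid are illustrative, not from the reference.

def conv2d_valid(grid, kernel):
    """'Valid' 2-D convolution (no padding) over nested lists."""
    gh, gw = len(grid), len(grid[0])
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(grid[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(gw - kw + 1)]
            for i in range(gh - kh + 1)]

AVG3 = [[1 / 9] * 3 for _ in range(3)]   # 3x3 box filter as the "risk" kernel

occupancy = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],   # 1 = cell occupied by a detected object
    [0, 0, 0, 0],
]
risk_map = conv2d_valid(occupancy, AVG3)  # 1x2 map of local object density
```

A real CNN replaces the fixed box filter with many learned kernels and nonlinearities, but the input-grid-to-risk-map shape of the computation is the same.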
Regarding claim 2 (similarly claim 13), the combination of Kawahara in view of Kozuka teaches the method according to claim 1. Kawahara further teaches wherein the content is conveyed over a vehicle to everything (V2X) communication channel. (see at least Fig. 1-11 [0011, 0029-0102]: the communication unit performs road-to-vehicle communication between the host vehicle and a radio communication station on a road side, or vehicle-to-vehicle communication between the host vehicle and a communicator mounted in a peripheral vehicle, or communicates with a server via a base station of a wide area network. The communication unit may receive weather information and traffic congestion information.)

Regarding claim 3 (similarly claim 14), the combination of Kawahara in view of Kozuka teaches the method according to claim 2. Kawahara further teaches wherein the V2X communication channel is at least one of a vehicle to vehicle communication channel, a vehicle to infrastructure communication channel, a vehicle to pedestrian communication channel, and a vehicle to network communication channel. (see at least Fig. 1-11 [0011, 0029-0102]: the communication unit performs road-to-vehicle communication between the host vehicle and a radio communication station on a road side, or vehicle-to-vehicle communication between the host vehicle and a communicator mounted in a peripheral vehicle, or communicates with a server via a base station of a wide area network. The communication unit may receive weather information and traffic congestion information.)

Regarding claim 4 (similarly claim 15), the combination of Kawahara in view of Kozuka teaches the method according to claim 2. Kawahara further teaches communicating, over the V2X channel, at least one of the virtual force and the one or more virtual fields.
(see at least [0011-0013, 0063-0102]: the server is capable of collecting the information regarding the plurality of vehicles traveling on the road, and generating the distribution of potential fields indicative of the degree of psychological pressure on the road based on the information regarding each of the vehicles. The items of information, which are used to control the travel states of the vehicles in order for the vehicles to travel avoiding high potential fields, are capable of being integrally determined based on the distribution of the potential fields across the road. Each of the vehicles is notified of the control information determined by the server, the travel state of each of the vehicles is controlled based on the notified control information, and thus psychological pressure given to a driver of each of the vehicles due to the peripheral vehicles is capable of being reduced.)

Regarding claim 5 (similarly claim 16), the combination of Kawahara in view of Kozuka teaches the method according to claim 1. Kawahara further teaches wherein the content is related to a specified object that is located at a region within the environment, wherein the impacting is related to the region (see at least Fig. 1-11, [0029-0102]: the potential field value may be set to be increased to the extent that the speed of the host vehicle or the speed of the peripheral vehicle is high. The value or range of a potential field may be adjusted according to the size of the peripheral vehicle, and may be adjusted according to road conditions such as the width, the curve, or the slope of a road. Under bad weather conditions such as rainy, stormy, or foggy weather, a potential field may be assigned to a wide range, or the value of the potential field may be set to be greater than under normal conditions, based on weather information acquired by the communication unit.
A speed, the size of a peripheral vehicle, road conditions, and weather conditions are capable of being reflected in the calculation of a potential field. The vehicles disposed in the bird's eye view image include not only the vehicles with the in-vehicle device which provide the information to the server, but also peripheral vehicles which are detected by the sensors of the vehicles with the in-vehicle device.)

Regarding claim 6 (similarly claim 17), the combination of Kawahara in view of Kozuka teaches the method according to claim 1. Kawahara further teaches wherein obtaining the object information is in accordance with the content (see at least Fig. 1-11, [0029-0102]: the bird's eye view image illustrates positional relationships between the vehicles on the road, based on the information received from the vehicles and road map information regarding the current location. The bird's eye view image has a composition in which the positional relationships between all of the vehicles traveling in the predetermined road section in the same direction are overlooked from above, and in this image the vehicles are disposed at the corresponding positions in regions in which the lanes and the shape of the road are reproduced. The vehicles disposed in the bird's eye view image include not only the vehicles with the in-vehicle device which provide the information to the server, but also peripheral vehicles which are detected by the sensors of the vehicles with the in-vehicle device. The computational unit assigns a potential field to each peripheral vehicle in the bird's eye view image prepared, and prepares a potential distribution illustrating a state in which the potential fields of the vehicles are integrally distributed across the entire region of the bird's eye view image.)
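The adjustments Kawahara describes above (the potential field value raised with vehicle speed and size, and widened or raised under bad weather) amount to a scalar field around each peripheral vehicle. A minimal sketch follows; the Gaussian shape and every coefficient are illustrative assumptions, since the reference only describes the adjustments qualitatively.

```python
import math

def potential_at(dx: float, dy: float, speed_mps: float,
                 vehicle_length_m: float, bad_weather: bool = False,
                 base: float = 1.0, sigma: float = 5.0) -> float:
    """Illustrative potential-field value at offset (dx, dy) metres from a
    peripheral vehicle. Higher speed and a larger vehicle raise the peak;
    bad weather widens the field, matching Kawahara's qualitative rules."""
    peak = base * (1.0 + 0.05 * speed_mps) * (1.0 + 0.1 * vehicle_length_m)
    spread = sigma * (1.5 if bad_weather else 1.0)  # wider field in bad weather
    return peak * math.exp(-(dx * dx + dy * dy) / (2.0 * spread * spread))
```

With these assumed coefficients, a fast truck in fog contributes a higher, broader field than a slow car on a clear day, which is the behavior the cited paragraphs describe.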
Regarding claim 7 (similarly claim 18), the combination of Kawahara in view of Kozuka teaches the method according to claim 1. Kawahara further teaches wherein the content is related to a not-sensed object that is not sensed by the vehicle (see at least Fig. 1-11, [0029-0102]: the bird's eye view image illustrates positional relationships between the vehicles on the road, based on the information received from the vehicles and road map information regarding the current location. The bird's eye view image has a composition in which the positional relationships between all of the vehicles traveling in the predetermined road section in the same direction are overlooked from above, and in this image the vehicles are disposed at the corresponding positions in regions in which the lanes and the shape of the road are reproduced. The vehicles disposed in the bird's eye view image include not only the vehicles with the in-vehicle device which provide the information to the server, but also peripheral vehicles which are detected by the sensors of the vehicles with the in-vehicle device. The computational unit assigns a potential field to each peripheral vehicle in the bird's eye view image prepared, and prepares a potential distribution illustrating a state in which the potential fields of the vehicles are integrally distributed across the entire region of the bird's eye view image.)

Regarding claim 8 (similarly claim 19), the combination of Kawahara in view of Kozuka teaches the method according to claim 7. Kawahara further teaches wherein the determining the virtual force comprises avoiding a direct impact of the not-sensed object on the vehicle (see at least Fig. 1-11, [0029-0102]: the control unit determines whether the position of the host vehicle has a high potential in the potential distribution. When the host vehicle is positioned in a low potential field, the control unit may command the engine control ECU of the vehicle to maintain the current speed.
When it is determined that the host vehicle is positioned in a high potential field, the control unit determines whether the host vehicle is capable of escaping from the high potential field to a low potential field by changing the position of the host vehicle relative to the peripheral vehicles in the forward and rearward direction while maintaining the current travel lane, or by changing lanes from the travel lane to an adjacent lane. The control unit commands the engine control or brake control ECU and/or the steering angle control ECU to adjust the speed of the host vehicle and/or change lanes such that the host vehicle enters the low potential field.)

Regarding claim 9 (similarly claim 20), the combination of Kawahara in view of Kozuka teaches the method according to claim 1. Kawahara further teaches wherein the obtaining of the information comprises receiving at least a part of the object information from the information source (see at least Fig. 1-11, [0029-0102]: the bird's eye view image illustrates positional relationships between the vehicles on the road, based on the information received from the vehicles and road map information regarding the current location. The bird's eye view image has a composition in which the positional relationships between all of the vehicles traveling in the predetermined road section in the same direction are overlooked from above, and in this image the vehicles are disposed at the corresponding positions in regions in which the lanes and the shape of the road are reproduced. The vehicles disposed in the bird's eye view image include not only the vehicles with the in-vehicle device which provide the information to the server, but also peripheral vehicles which are detected by the sensors of the vehicles with the in-vehicle device.
The computational unit assigns a potential field to each peripheral vehicle in the bird's eye view image prepared, and prepares a potential distribution illustrating a state in which the potential fields of the vehicles are integrally distributed across the entire region of the bird's eye view image.)

Regarding claim 10, the combination of Kawahara in view of Kozuka teaches the method according to claim 1. Kawahara further teaches obtaining location information pertaining to a location of the information source, and determining spatial relationships between the information source, the specified object and the vehicle, such that the virtual force is determined contingent on the determined spatial relationship between the information source, the specified object and the vehicle (see at least Fig. 1-11, [0029-0102]: the bird's eye view image illustrates positional relationships between the vehicles on the road, based on the information received from the vehicles and road map information regarding the current location. The bird's eye view image has a composition in which the positional relationships between all of the vehicles traveling in the predetermined road section in the same direction are overlooked from above, and in this image the vehicles are disposed at the corresponding positions in regions in which the lanes and the shape of the road are reproduced. The vehicles disposed in the bird's eye view image include not only the vehicles with the in-vehicle device which provide the information to the server, but also peripheral vehicles which are detected by the sensors of the vehicles with the in-vehicle device. The computational unit assigns a potential field to each peripheral vehicle in the bird's eye view image prepared, and prepares a potential distribution illustrating a state in which the potential fields of the vehicles are integrally distributed across the entire region of the bird's eye view image.)
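The integration step the cited paragraphs describe (a per-vehicle potential field assigned in the bird's eye view, then combined into one distribution over the whole image, which the control unit consults to find low-potential positions) is essentially a superposition over a grid. A sketch under stated assumptions: the additive combination, Gaussian shape, and grid resolution are illustrative choices, not Kawahara's disclosed implementation.

```python
import math

def potential_distribution(vehicles, width, height):
    """Sum a simple Gaussian potential contributed by each peripheral
    vehicle (x, y, peak) over an integer grid covering the bird's eye view."""
    grid = [[0.0] * width for _ in range(height)]
    for x, y, peak in vehicles:
        for gy in range(height):
            for gx in range(width):
                d2 = (gx - x) ** 2 + (gy - y) ** 2
                grid[gy][gx] += peak * math.exp(-d2 / 8.0)
    return grid

def lowest_potential_cell(grid):
    """Control heuristic in the spirit of the reference: locate the grid
    cell with the lowest integrated potential (least 'pressure')."""
    best = min((grid[gy][gx], gx, gy)
               for gy in range(len(grid)) for gx in range(len(grid[0])))
    return best[1], best[2]
```

A host vehicle sitting in a high-potential cell would then be steered toward a reachable low-potential cell, which is the escape behavior quoted for claims 8 and 19 above.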
Regarding claim 11, the combination of Kawahara in view of Kozuka teaches the method according to claim 1. Kawahara further teaches wherein the at least one step of the obtaining, the estimating and the determining is selectively executed based on the content (see at least Fig. 1-11, [0029-0102]: the potential field value may be set to be increased to the extent that the speed of the host vehicle or the speed of the peripheral vehicle is high. The value or range of a potential field may be adjusted according to the size of the peripheral vehicle, and may be adjusted according to road conditions such as the width, the curve, or the slope of a road. Under bad weather conditions such as rainy, stormy, or foggy weather, a potential field may be assigned to a wide range, or the value of the potential field may be set to be greater than under normal conditions, based on weather information acquired by the communication unit. A speed, the size of a peripheral vehicle, road conditions, and weather conditions are capable of being reflected in the calculation of a potential field. The vehicles disposed in the bird's eye view image include not only the vehicles with the in-vehicle device which provide the information to the server, but also peripheral vehicles which are detected by the sensors of the vehicles with the in-vehicle device.)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANA F ARTIMEZ, whose telephone number is (571) 272-3410. The examiner can normally be reached M-F, 9:00 am-3:30 pm EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Faris S. Almatrahi, can be reached at (313) 446-4821.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANA F ARTIMEZ/
Examiner, Art Unit 3667

/FARIS S ALMATRAHI/
Supervisory Patent Examiner, Art Unit 3667

Prosecution Timeline

Sep 15, 2023
Application Filed
Aug 01, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596371
SYSTEM AND METHOD FOR INTERCEPTION AND COUNTERING UNMANNED AERIAL VEHICLES (UAVS)
2y 5m to grant Granted Apr 07, 2026
Patent 12573078
METHOD AND APPARATUS FOR DETERMINING VEHICLE LOCATION BASED ON OPTICAL CAMERA COMMUNICATION
2y 5m to grant Granted Mar 10, 2026
Patent 12571646
Automated Discovery and Monitoring of Uncrewed Aerial Vehicle Ground-Support Infrastructure
2y 5m to grant Granted Mar 10, 2026
Patent 12560441
METHOD AND APPARATUS FOR OPTIMIZING A MULTI-STOP TOUR WITH FLEXIBLE MEETING LOCATIONS
2y 5m to grant Granted Feb 24, 2026
Patent 12560936
SYSTEMS AND METHODS FOR OBJECT DETECTION
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
58%
Grant Probability
99%
With Interview (+43.9%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 80 resolved cases by this examiner. Grant probability derived from career allow rate.
