Prosecution Insights
Last updated: April 19, 2026
Application No. 18/870,523

METHOD, SYSTEM AND COMPUTER PROGRAM PRODUCT FOR INTERACTIVE COMMUNICATION BETWEEN A MOVING OBJECT AND A USER

Non-Final OA: §101, §102, §103, §112
Filed
Nov 29, 2024
Examiner
GASCA ALVA JR, MOISES
Art Unit
3667
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Porsche Ebike Performance GmbH
OA Round
1 (Non-Final)
Grant Probability: 44% (Moderate)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 44% of resolved cases (31 granted / 71 resolved; -8.3% vs TC avg)
Interview Lift: +57.9% across resolved cases with interview
Avg Prosecution: 3y 3m (typical timeline)
Currently Pending: 25
Total Applications: 96, across all art units
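The headline figures above can be reproduced from the raw counts. A quick sketch (assuming the "vs TC avg" delta is expressed in percentage points, which the dashboard does not state explicitly):

```python
# Reproduce the dashboard's examiner figures from the raw counts.
# Assumption: the "vs TC avg" delta is in percentage points.
granted, resolved = 31, 71
allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")  # 43.7%, displayed as 44%

delta_vs_tc = -8.3
implied_tc_avg = allow_rate - delta_vs_tc
print(f"Implied TC 3600 average: {implied_tc_avg:.1f}%")  # ~52.0%
```

The 31/71 ratio rounds up to the displayed 44%; the delta then implies a Tech Center baseline of roughly 52% for resolved cases.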

Statute-Specific Performance

§101: 24.6% (-15.4% vs TC avg)
§102: 6.2% (-33.8% vs TC avg)
§103: 47.4% (+7.4% vs TC avg)
§112: 21.7% (-18.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 71 resolved cases
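The per-statute deltas are internally consistent: subtracting each "vs TC avg" delta from the examiner's rate recovers the same Tech Center baseline. A quick check (again assuming the deltas are percentage points):

```python
# Cross-check the statute-specific chart: examiner rate minus the
# "vs TC avg" delta should give the Tech Center baseline estimate.
stats = {"§101": (24.6, -15.4), "§102": (6.2, -33.8),
         "§103": (47.4, +7.4), "§112": (21.7, -18.3)}
for statute, (rate, delta) in stats.items():
    baseline = rate - delta
    print(f"{statute}: examiner {rate}%, implied TC avg {baseline:.1f}%")
# All four statutes imply the same ~40.0% Tech Center baseline.
```

That all four bars point back to a single ~40% baseline suggests the chart uses one overall Tech Center average rather than per-statute averages.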

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This is a non-final Office Action on the merits. Claims 1-15 are currently pending and are addressed below.

Examiner notes that the fundamentals of the rejections are based on the broadest reasonable interpretation of the claim language. Applicant is kindly invited to consider the reference as a whole. References are to be interpreted as by one of ordinary skill in the art rather than as by a novice. See MPEP 2141. Therefore, the relevant inquiry when interpreting a reference is not what the reference expressly discloses on its face but what the reference would teach or suggest to one of ordinary skill in the art.

Priority

Acknowledgment is made of applicant's claim of priority for foreign DE application DE 102022113992.1, filed on 06/02/2022.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 11/29/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Drawings

The drawings are objected to because FIGS. 1-3 do not have any text describing the components/boxes; for clarity, it would be appreciated if everything were labeled with a description. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended.
The figure or figure number of an amended drawing should not be labeled as "amended." If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Objections

Claim 3 is objected to because of the following informality: "The method of in claim 1" should be changed to --The method of claim 1--. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

"scenario module" in claims 1 and 9;
"assessment module" in claims 1 and 9;
"output module" in claims 1 and 9;
"input module" in claim 9.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Upon reviewing the specification, the corresponding structure for the different modules is found. Support for the scenario, assessment and output modules is found in [035]: "In a further embodiment, the assessment module, the scenario module and the output module are integrated in a cloud computing infrastructure.
In particular, a 5G mobile radio connection or 6G mobile radio connection may be used for the data connection of the sensor apparatus to the scenario module or the cloud computing infrastructure and for the data connection of the input module to the assessment module or the cloud computing infrastructure for real-time data transmission." ... [044-046]: "FIG. 1 shows a system 100 according to the invention for interactive communication between a moving object 10 and a user. The system 100 comprises the moving object 10 and a plurality of modules that can comprise both integrated or assigned processors and/or memory units. In particular, the moving object 10 is an electric bicycle. However, it can also be a motor vehicle, an autonomously driving motor vehicle, an agricultural vehicle such as a combine harvester, a robot in production or in service and care facilities, or a watercraft or a flying object such as an air taxi. In one embodiment, the moving object 10 may also be an auxiliary device for people with visual impairments in order to move safely along a route, such as in the form of a rollator or a similar rolling device. The moving object 10 is used by a user as a means of transport or as a means of support when traveling along a route. A "module" can therefore be understood in connection with the invention as meaning, for example, a processor and/or a memory unit for storing program instructions. For example, the module is specifically configured to execute the program instructions in such a way as to implement or realize the method according to the invention or a step of the method according to the invention."

Support for the input module is found in [072-073]: "The input module 200 is provided for the purpose of capturing first user-specific data 250 and second user-specific data 290. The first user-specific data 250 are data input by a user by means of a user interface 240.
The user interface 240 is therefore designed to input and generate data 250 in the form of text messages and/or voice messages and/or images and graphics. For the input of the data 250, a keyboard, a microphone, a camera and/or a display designed as a touch screen are provided in particular. In addition, the input module 200 is connected to second sensors 270 which capture physiological and/or physical reactions of a user when traveling along a route with the moving object 10. The sensors 270 for capturing physiological and/or physical parameters of a user are in particular sensors which are attached to the body of the user or are connected to the body. In particular, a sensor 270 may be designed as a blood pressure monitor, as a heart rate monitor and/or a temperature gage. A possible embodiment of a sensor 270 is a fitness wristband such as from FITBIT® or other manufacturers, which continuously measures the heart rate. These fitness bands can be attached to the user's wrist and the measured data can be easily read out. The pulse, and thus the heart rate, in these devices is generally measured optically by means of the changed reflection behavior of emitted LED light in the case of a change in the blood flow due to the contraction of the blood capillary vessels when the heart beats. The device typically emits light in the green wavelength range into the tissue on the wrist and measures the reflected light. Since blood strongly absorbs the light in this wavelength range, the measured light intensity fluctuates when the blood vessels pulsate, from which the heart rate can be determined. In a stressful situation, the heart rate accelerates, and so the changed heart rate is a good indicator of the occurrence of a stressful situation."

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C.
112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding Claim 1, it is unclear what is being claimed by the limitation "using a software application (550) of the scenario module (500) and transmitting the generated scenario to an output module (700) for generating (S30) at least one scenario from the sensor data (350) for the traffic event in the environment of the moving object (10)". The limitation is unclear because it appears to transmit a generated scenario to an output module for the purpose of generating that scenario, which seems to be a contradiction, as the generated scenario has not yet been generated. Examiner will interpret this limitation as being met when a scenario generated from sensor data by a software application is transmitted to an output module (similar to claim 9).

Regarding Claim 1, it is unclear what is being claimed by the limitation "transmitting (S50) the first data (250) and/or the second data (290) to an assessment module (400)". The limitation is unclear because it appears to transmit data to the assessment module, but nothing seems to be done with this data; it would seem the assessment module is part of the following "generating" limitation as seen in claim 1.
Examiner will interpret this limitation as being met when a user-specific assessment function is generated from the data using the assessment module (similar to claim 9).

Claim 1 recites the limitation "a user" in line 16. There is insufficient antecedent basis for this limitation in the claim.
Claim 1 recites the limitation "the first data" in lines 19 and 21-22. There is insufficient antecedent basis for this limitation in the claim.
Claim 1 recites the limitation "the second data" in lines 19 and 22. There is insufficient antecedent basis for this limitation in the claim.
Claim 1 recites the limitation "user-specific output data" in line 28. There is insufficient antecedent basis for this limitation in the claim.
Claim 1 recites the limitation "the user-specific output data" in line 29. There is insufficient antecedent basis for this limitation in the claim.
Claim 3 recites the limitation "the recorded sensor data" in line 8. There is insufficient antecedent basis for this limitation in the claim.
Claim 5 recites the limitation "the data connection" in lines 4-5 and 6. There is insufficient antecedent basis for this limitation in the claim.
Claim 9 recites the limitation "a sensor apparatus" in line 9. There is insufficient antecedent basis for this limitation in the claim.
Claim 9 recites the limitation "an environment" in line 12. There is insufficient antecedent basis for this limitation in the claim.
Claim 9 recites the limitation "a user" in lines 18-19. There is insufficient antecedent basis for this limitation in the claim.
Claim 9 recites the limitation "an assessment module" in line 21. There is insufficient antecedent basis for this limitation in the claim.
Claim 9 recites the limitation "the first data" in lines 21 and 24. There is insufficient antecedent basis for this limitation in the claim.
Claim 9 recites the limitation "the second data" in lines 21 and 24. There is insufficient antecedent basis for this limitation in the claim.
Claim 9 recites the limitation "the generated scenarios" in line 28. There is insufficient antecedent basis for this limitation in the claim.
Claim 9 recites the limitation "user-specific output data" in line 30. There is insufficient antecedent basis for this limitation in the claim.
Claim 9 recites the limitation "the user-specific output data" in line 31. There is insufficient antecedent basis for this limitation in the claim.
Claim 11 recites the limitation "the recorded sensor data" in line 8. There is insufficient antecedent basis for this limitation in the claim.
Claim 12 recites the limitation "the data connection" in lines 4-5 and 6. There is insufficient antecedent basis for this limitation in the claim.

Regarding Claims 1, 3, 9 and 11, the use of "software application (450, 550, 750)" does not introduce antecedent basis issues, as these appear to be different applications, but examiner would ask that, if possible, the language be clarified to avoid having so many software applications. In addition, all claims would be in better shape if cleaned up to remove the numbers in parentheses for clarity.

All dependent claims are rejected for depending on rejected independent claims.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

101 Analysis – Step 1

Claim 1 is directed to a method, claim 9 is directed to a system, and claim 15 is directed to one or more non-transitory computer-readable media. Therefore, claims 1, 9 and 15 are within at least one of the four statutory categories.
101 Analysis – Step 2A, Prong I

Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.

Independent claim 1 includes limitations that recite an abstract idea (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection. The analogous claims 9 and 15 are rejected for the same reasons as representative claim 1 as discussed here. Claim 1 recites:

A method for interactive communication between a moving object (10) and a user when traveling along a route with a variety of scenarios, wherein a scenario represents a traffic event in a temporal sequence, the method comprising: using first sensors (340) of a sensor apparatus (300) of the moving object (10) for capturing (S10) sensor data (350) relating to an environment of the moving object (10); transmitting (S20) the sensor data (350) to a scenario module (500); using a software application (550) of the scenario module (500) and transmitting the generated scenario to an output module (700) for generating (S30) at least one scenario from the sensor data (350) for the traffic event in the environment of the moving object (10); capturing (S40) first user-specific data (250), as voice messages, text messages and/or images, and/or second user-specific data (290) as measurement signals from second sensors (270), the first user-specific data (250) being input by a user by means of a user interface (240), and the second sensors (270) measuring physiological and/or physical parameters of the user; transmitting (S50) the first data (250) and/or the second data (290) to an assessment module (400); generating (S60) a user-specific assessment function (470) from the first data (250) and the second data (290) by means of a software application
(450) and transmitting the user-specific assessment function (470) to an output module (700); creating (S70) output data (770) by means of a software application (750) of the output module (700), wherein the software application (750) assesses the generated scenarios with the user-specific assessment function (470) and generates user-specific output data (770) therefrom; outputting (S80) the user-specific output data (770) to the user.

The examiner submits that the foregoing bolded limitations constitute a "mental process" because, under its broadest reasonable interpretation, the claim covers performance of the limitations in the human mind. For example, "generating ..." and "creating ..." the various data in the context of this claim encompasses a person looking at collected data (received, detected, etc.) and forming a simple judgment (determination, analysis, comparison, etc.), either mentally or using pen and paper. Accordingly, the claim recites at least one abstract idea.

The Examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas, the 'basic tools of scientific and technological work' that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs. Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 ("'[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work'" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S.
584, 589, 198 USPQ 193, 197 (1978) (same).

101 Analysis – Step 2A, Prong II

Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements that merely use a computer to implement an abstract idea, add insignificant extra-solution activity, or generally link use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a "practical application."

In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the "additional limitations" while the bolded portions continue to represent the "abstract idea"):

A method for interactive communication between a moving object (10) and a user when traveling along a route with a variety of scenarios, wherein a scenario represents a traffic event in a temporal sequence, the method comprising: using first sensors (340) of a sensor apparatus (300) of the moving object (10) for capturing (S10) sensor data (350) relating to an environment of the moving object (10); transmitting (S20) the sensor data (350) to a scenario module (500); using a software application (550) of the scenario module (500) and transmitting the generated scenario to an output module (700) for generating (S30) at least one scenario from the sensor data (350) for the traffic event in the environment of the moving object (10); capturing (S40) first user-specific data (250), as voice messages, text messages and/or images, and/or second user-specific data (290) as measurement signals from second sensors (270), the
first user-specific data (250) being input by a user by means of a user interface (240), and the second sensors (270) measuring physiological and/or physical parameters of the user; transmitting (S50) the first data (250) and/or the second data (290) to an assessment module (400); generating (S60) a user-specific assessment function (470) from the first data (250) and the second data (290) by means of a software application (450) and transmitting the user-specific assessment function (470) to an output module (700); creating (S70) output data (770) by means of a software application (750) of the output module (700), wherein the software application (750) assesses the generated scenarios with the user-specific assessment function (470) and generates user-specific output data (770) therefrom; outputting (S80) the user-specific output data (770) to the user.

For the following reasons, the examiner submits that the above-identified additional limitations do not integrate the above-noted abstract idea into a practical application. Regarding the additional limitations above, the examiner submits that these limitations are insignificant extra-solution activities that merely use a computer (processor) to perform the process. In particular, the capturing steps from/using sensor system(s) are recited at a high level of generality (i.e., as a general means of receiving information for use in the generating and other steps) and amount to mere data gathering, which is a form of insignificant extra-solution activity. The transmitting steps are also recited at a high level of generality and amount to mere post-solution action, which is a form of insignificant extra-solution activity.
Lastly, claims 1, 9 and 15 further recite "a software application" and "A computer program product (900) comprising a non-transitory executable program code (950) that is configured to carry out the method of claim 1 when executed", which merely describe how to generally "apply" the otherwise mental judgments in a generic or general-purpose vehicle environment with a computer. See Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. at 223 ("[T]he mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention."). The device(s) and processor(s) are recited at a high level of generality and merely automate the steps. Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application.

Further, looking at the additional limitations as an ordered combination or as a whole, the limitations add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field; apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition; implement or use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim; effect a transformation or reduction of a particular article to a different state or thing; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05).
Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

101 Analysis – Step 2B

Regarding Step 2B of the 2019 PEG, as discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor to perform the steps amounts to nothing more than applying the exception using a generic computer component. Generally applying an exception using a generic computer component cannot provide an inventive concept. And as discussed above, the additional limitations are insignificant extra-solution activities.

The additional limitations of receiving information and detecting/detectable values/features are well-understood, routine and conventional activities, because the background recites that the sensors are all conventional sensors, and the specification does not provide any indication that the processor is anything other than a conventional computer. MPEP 2106.05(d)(II), and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner. Hence, the claim is not patent eligible.

Dependent claims 2-8 and 10-14 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or additional elements that do not integrate the judicial exception into a practical application. The dependent claims merely define terms or have additional steps such as "generate".
Therefore, dependent claims 2-8 and 10-14 are not patent eligible, and claims 1-15 are ineligible under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 5, 7, 9-12 and 14-15 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Sobhany (US 20200239003 A1).

Examiner's Note: Regarding the effect of the preamble for examining purposes, please see MPEP 2111.02 II.

Regarding Claim 1, Sobhany teaches A method for interactive communication between a moving object (10) and a user when traveling along a route with a variety of scenarios, wherein a scenario represents a traffic event in a temporal sequence, the method comprising (see at least [¶027, 031 & Claim 1]): using first sensors (340) of a sensor apparatus (300) of the moving object (10) for capturing (S10) sensor data (350) relating to an environment of the moving object (10) (Using sensors of a moving object/vehicle to capture sensor data of the environment of the moving object/vehicle; see at least [¶031, 042, 073-074 & 0201]); transmitting (S20) the sensor data (350) to a scenario module (500) (Transmitting the sensor data to be processed by a module.
see at least [¶042, 054, 064-066 & 073]); using a software application (550) of the scenario module (500) and transmitting the generated scenario to an output module (700) for generating (S30) at least one scenario from the sensor data (350) for the traffic event in the environment of the moving object (10) (Generating a scenario from the sensor data for a traffic event/condition/context in the environment of the moving object/vehicle and transmitting the scenario to another module for further processing; see at least [¶042, 0564-066, 073, 091, 0103, 0160, 0201 & 0211-0212]); capturing (S40) first user-specific data (250), as voice messages, text messages and/or images, and/or second user-specific data (290) as measurement signals from second sensors (270), the first user-specific data (250) being input by a user by means of a user interface (240), and the second sensors (270) measuring physiological and/or physical parameters of the user (Obtaining user-specific data such as voice messages or images input by a user via an input interface, or obtaining additional user-specific data from sensors that measure emotional or health-related parameters of a user; see at least [¶043, 045, 065, 067, 0123, 0140, 0148-0150]); transmitting (S50) the first data (250) and/or the second data (290) to an assessment module (400) (Transmitting the user data to a module for further processing; see at least [¶064-066 & 0133-0136]); generating (S60) a user-specific assessment function (470) from the first data (250) and the second data (290) by means of a software application (450) and transmitting the user-specific assessment function (470) to an output module (700) (Generating a user-specific assessment/model from the collected data and transmitting this data to another module for further processing.
see at least [¶099-0104, 0107-0111]); creating (S70) output data (770) by means of a software application (750) of the output module (700), wherein the software application (750) assesses the generated scenarios with the user-specific assessment function (470) and generates user-specific output data (770) therefrom (Creating user-specific output data from the assessed generated scenarios that the vehicle is in and the user-specific assessment/profile; see at least [¶099-0104, 0161 & 0211-0212]); outputting (S80) the user-specific output data (770) to the user (Outputting the user-specific output data to the user; see at least [¶041, 068, 099-0104, 0161 & 0211-0212]).

Regarding Claims 2 and 10, Sobhany teaches all of the limitations of claims 1 and 9 as shown above; furthermore, Sobhany teaches wherein the first sensors (340) of the sensor apparatus (300) comprise one or more radar systems with one or more radar sensors, and/or one or more LIDAR systems for optical distance and speed measurement, and/or one or more image-recording 2D/3D cameras in the visible range and/or in the IR range and/or in the UV range, and/or GPS systems, and wherein one or more of the second sensors (270) is/are designed as a blood pressure monitor and/or heart rate monitor and/or temperature gage and/or acceleration sensor and/or speed sensor and/or capacitive sensor and/or inductive sensor and/or voltage sensor (The first sensors can include LIDAR or cameras, and the second sensors can measure heart rate, temperature, acceleration, speed, or capacitive data; see at least [¶042 & 0148-0150]).
Regarding Claims 3 and 11, Sobhany teaches all of the limitations of claims 1 and 9 as shown above. Furthermore, Sobhany teaches wherein the software application (450) of the assessment module (400) and/or the software application (550) of the scenario module (500) and/or the software application (750) of the output module (700) comprise(s) artificial intelligence and machine learning algorithms, and/or at least one reinforcement learning agent (LV), for generating the user-specific assessment function (470) and/or for generating scenarios from the recorded sensor data (350) and/or for generating output data (770) (The modules can include artificial intelligence and machine learning algorithms for generating output data and vehicle state/scenario. see at least [¶064-067 & 0136-0137]). Regarding Claims 5 and 12, Sobhany teaches all of the limitations of claims 1 and 9 as shown above. Furthermore, Sobhany teaches wherein the assessment module (400), the scenario module (500) and the output module (700) are integrated in a cloud computing infrastructure (800), and a 5G mobile radio connection or 6G mobile radio connection is used for the data connection of the sensor apparatus (300) to the scenario module (500) or the cloud computing infrastructure (800) and for the data connection of the input module (200) to the assessment module (400) or the cloud computing infrastructure (800) for real-time data transmission (The modules used for processing data can be integrated into a cloud computing infrastructure and data can be sent to the modules in the remote cloud server via 5G mobile connections. see at least [¶034-035 & 0248]). Regarding Claims 7 and 14, Sobhany teaches all of the limitations of claims 1 and 9 as shown above. Furthermore, Sobhany teaches wherein the output data (770) are voice messages, warning tones and/or music titles (The output data can be voice messages, warning/alerts, or music. see at least [¶041, 0102, 0161-0163 & 0211-0212]). 
Regarding Claim 9, Sobhany teaches a system (100) for interactive communication between a moving object (10) and a user when traveling along a route with a variety of scenarios, wherein a scenario represents a traffic event in a temporal sequence, the system comprising (see at least [¶027, 031 & 0248]): an input module (200), a sensor apparatus (300), an assessment module (400), a scenario module (500), and an output module (700) (Input devices and sensors along with modules for processing data. The modules are all computer programs that are implemented by processing devices. see at least [¶042-045, 053 & 0252]); the sensor apparatus (300) being designed to capture sensor data (350) relating to an environment of the moving object (10) by means of first sensors (340) of a sensor apparatus (300) of the moving object (10) (Using sensors of a moving object/vehicle to capture sensor data of the environment of the moving object/vehicle. see at least [¶031, 042, 073-074 & 0201]); and to transmit the sensor data (350) to the scenario module (500) (Transmitting the sensor data to be processed by a module. see at least [¶042, 054, 064-066 & 073]); the scenario module (500) being designed to generate at least one scenario from the sensor data (350) for the traffic event in an environment of the moving object (10) by means of a software application (550) and to transmit the generated scenario to an output module (700) (Generating a scenario from the sensor data for a traffic event/condition/context in the environment of the moving object/vehicle and transmitting the scenario to another module for further processing. 
see at least [¶042, 0564-066, 073, 091, 0103, 0160, 0201 & 0211-0212]); the input module (200) being designed to capture first user-specific data (250) in the form of voice messages, text messages and/or images, and/or second user-specific data (290) in the form of measurement signals from second sensors (270), the first user-specific data (250) being input by a user by means of a user interface (240), and the second sensors (270) measuring physiological and/or physical parameters of the user (Obtaining user specific data such as voice messages or images input by a user via an input interface, or obtaining additional user specific data from sensors that measure emotional or health related parameters of a user. see at least [¶043, 045, 065, 067, 0123, 0140, 0148-0150]); and to transmit the first data (250) and/or the second data (290) to an assessment module (400) (Transmitting the user data to a module for further processing. see at least [¶064-066 & 0133-0136]); the assessment module (400) generating a user-specific assessment function (470) from the first data (250) and the second data (290) by means of a software application (450) and transmitting the user-specific assessment function (470) to the output module (700) (Generating a user specific assessment/model from the collected data and transmitting this data to another module for further processing. see at least [¶099-0104, 0107-0111]); the output module (700) creating output data (770) by means of a software application (750) that assesses the generated scenarios with the user-specific assessment function (470) and generates user-specific output data (770) therefrom (Creating user specific output data from the assessed generated scenarios that the vehicle is in and the user specific assessment/profile. 
see at least [¶099-0104, 0161 & 0211-0212]); and the output module outputting the user-specific output data (770) directly or indirectly to the user by means of a transmission apparatus (Outputting the user specific output data to the user via an interface device. see at least [¶041, 068, 099-0104, 0161 & 0211-0212]). Regarding Claim 15, Sobhany teaches a computer program product (900) comprising a non-transitory executable program code (950) that is configured to carry out the method of claim 1 when executed (A computer program comprising non-transitory program code can be executed to carry out the method of claim 1. Please see the rejection above for claim 1. see at least [¶053 & Claim 10]). Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Sobhany (US 20200239003 A1) in view of Sendhoff (US 20170113685 A1). 
Regarding Claim 4, Sobhany teaches all of the limitations of claim 1 as shown above. Sobhany does not explicitly teach wherein further data from a database (850) are used to generate the output data (770). However, Sendhoff does teach wherein further data from a database (850) are used to generate the output data (770) (Additional data from databases can be used to generate an output to deal with a traffic situation. see at least [¶08-011]). Sendhoff is in a similar field, as it also deals with assisting a driver of a vehicle. Therefore, it would have been obvious to those having ordinary skill in the art before the effective filing date of the instant application to modify Sobhany to use the technique of having further data from a database be used to generate the output data as taught by Sendhoff. Doing so would lead to improved response to a traffic situation by a user (see at least [¶011]). Claims 6 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Sobhany (US 20200239003 A1) in view of Moustafa (US 20220126864 A1). Regarding Claims 6 and 13, Sobhany teaches all of the limitations of claims 1 and 9 as shown above. Sobhany does not explicitly teach wherein a first version of the assessment function (470) is created in a training phase by means of a training set of user-specific data (250, 290). However, Moustafa does teach wherein a first version of the assessment function (470) is created in a training phase by means of a training set of user-specific data (250, 290) (A driver state model/assessment is created from a training phase using a set of user specific data. see at least [¶0520 & 0522-0523]). Moustafa is in a similar field, as it also deals with vehicle control and assistance using sensor data. 
Therefore, it would have been obvious to those having ordinary skill in the art before the effective filing date of the instant application to modify Sobhany to use the technique of having a first version of the assessment function be created in a training phase by means of a training set of user-specific data as taught by Moustafa. Doing so would lead to improved detection of a state of a driver (see at least [¶0520]). Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Sobhany (US 20200239003 A1) in view of Yamaguchi (US 20220012561 A1). Regarding Claim 8, Sobhany teaches all of the limitations of claim 1 as shown above. Sobhany does not explicitly teach wherein the scenarios are designated by labels for a classification by the assessment function (470). However, Yamaguchi does teach wherein the scenarios are designated by labels for a classification by the assessment function (470) (The scenarios can have designated labels that can be used for classifying the traffic scenario. see at least [¶031-032 & 052-056]). Yamaguchi is in a similar field, as it also deals with classifying traffic scenarios. Therefore, it would have been obvious to those having ordinary skill in the art before the effective filing date of the instant application to modify Sobhany to use the technique of having the scenarios be designated by labels for a classification by the assessment function as taught by Yamaguchi. Doing so would lead to improved vehicle response from the classification (see at least [¶047-049]). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. 
METHOD, SYSTEM, AND COMPUTER PROGRAM PRODUCT FOR DETERMINING SAFETY-CRITICAL TRAFFIC SCENARIOS FOR DRIVER ASSISTANCE SYSTEMS (DAS) AND HIGHLY AUTOMATED DRIVING FUNCTIONS (HAD) (US 20220080975 A1) Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOISES GASCA ALVA JR whose telephone number is (571)272-3752. The examiner can normally be reached Monday-Friday 6:30 - 4:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Faris Almatrahi, can be reached on (313) 446-4821. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MOISES GASCA ALVA/Examiner, Art Unit 3667 /FARIS S ALMATRAHI/Supervisory Patent Examiner, Art Unit 3667

Prosecution Timeline

Nov 29, 2024
Application Filed
Feb 05, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601140
AUTOMATICALLY STEERING A MOBILE MACHINE
2y 5m to grant Granted Apr 14, 2026
Patent 12591242
METHOD AND APPARATUS FOR OBTAINING OBSERVATION DATA OF AN ENVIRONMENT
2y 5m to grant Granted Mar 31, 2026
Patent 12565199
SYSTEMS AND METHODS FOR RAPID DECELERATION
2y 5m to grant Granted Mar 03, 2026
Patent 12504757
AUTONOMOUS VEHICLE SAFETY SYSTEM AND METHOD
2y 5m to grant Granted Dec 23, 2025
Patent 12485724
SYSTEM AND METHOD FOR TEMPERATURE CONTROL WHILE CHARGING AN ELECTRIC TRANSPORT DEVICE INSIDE A VEHICLE
2y 5m to grant Granted Dec 02, 2025
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
44%
Grant Probability
99%
With Interview (+57.9%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 71 resolved cases by this examiner. Grant probability derived from career allow rate.
