Prosecution Insights
Last updated: April 19, 2026
Application No. 17/996,950

SIMULATION METHOD FOR A PIXEL HEADLAMP SYSTEM

Non-Final OA: §101, §103, §112

Filed: Oct 24, 2022
Examiner: LEATHERS, EMILY GORMAN
Art Unit: 2187
Tech Center: 2100 — Computer Architecture & Software
Assignee: Dspace GmbH
OA Round: 1 (Non-Final)

Grant probability: 75% (Favorable)
Predicted OA rounds: 1-2
Estimated time to grant: 4y 0m
Grant probability with interview: 99%

Examiner Intelligence

Career allowance rate: 75% (3 granted / 4 resolved), +20.0% vs Tech Center average (above average)
Interview lift: +33.3% among resolved cases with interview (strong)
Typical timeline: 4y 0m average prosecution; 31 applications currently pending
Career history: 35 total applications across all art units

Statute-Specific Performance

§101: 31.5% (-8.5% vs TC avg)
§103: 33.6% (-6.4% vs TC avg)
§102: 8.8% (-31.2% vs TC avg)
§112: 23.6% (-16.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 4 resolved cases.
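The headline figures above are simple ratios. A minimal sketch of the arithmetic, using only the counts and deltas reported on this page (the TC-average baselines are back-derived from the stated deltas, not independent data):

```python
# Reproduce the examiner-statistics arithmetic from the figures shown above.
# Inputs come from this report; TC baselines are derived from the stated deltas.

granted, resolved = 3, 4
career_allow_rate = granted / resolved       # 0.75 -> "75% Career Allow Rate"

tc_avg_allow = career_allow_rate - 0.20      # "+20.0% vs TC avg" implies a 55% baseline

# Statute-specific rates and their stated deltas vs. the TC-average estimate
statute_rates = {"101": 0.315, "103": 0.336, "102": 0.088, "112": 0.236}
statute_deltas = {"101": -0.085, "103": -0.064, "102": -0.312, "112": -0.164}
tc_baselines = {s: statute_rates[s] - statute_deltas[s] for s in statute_rates}

print(f"career allow rate: {career_allow_rate:.0%}")
for s, b in tc_baselines.items():
    print(f"§{s}: examiner {statute_rates[s]:.1%} vs TC avg estimate {b:.1%}")
```

Notably, all four back-derived baselines come out to the same 40% figure, which suggests the report computes each statute delta against a single Tech Center average estimate.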

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

The application claims domestic priority to PCT/EP2021/060350 with priority date 04/21/2021 and domestic priority to German patent application DE 10 2020 112 284.5 with priority date 05/06/2020. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Specification

The use of the terms LucidDrive, LucidShape, Synopsys, ALiSiA, and Hella KGaA, which are trade names or marks used in commerce, has been noted in this application. Each term should be accompanied by the generic terminology; furthermore, each term should be capitalized wherever it appears or, where appropriate, include a proper symbol indicating use in commerce, such as ™, SM, or ®, following the term. Please see at least specification paragraphs 10-11. Although the use of trade names and marks used in commerce (i.e., trademarks, service marks, certification marks, and collective marks) is permissible in patent applications, the proprietary nature of the marks should be respected and every effort made to prevent their use in any manner which might adversely affect their validity as commercial marks.

The disclosure is objected to because of the following informalities: In ¶, “as the next lined-up virtual is analyzed” does not make sense because it appears that a word is missing after the term “virtual”. Appropriate correction is required.

Claim Objections

Claims 1-10 are objected to because of the following informalities: The claims are labeled as “Claim #” instead of simply “#”. The required form is that claims be numbered consecutively in Arabic numerals; the inclusion of the word “Claim” is unnecessary. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C.
112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

The claims are generally narrative and indefinite, failing to conform with current U.S. practice. They appear to be a literal translation into English from a foreign document and are replete with grammatical and idiomatic errors.

Regarding claim 1: The preamble of claim 1 recites “wherein a two-dimensional distribution of illuminance in an illuminable area of a scene illuminable by the actual pixel headlamp based on features of different regions of the illuminable area of the scene, comprising:”, which does not make grammatical sense and lacks clarity as to what is particularly being claimed. It appears that, between the originally filed claims and the preliminary amendment, a word or words corresponding to the control of the actual pixel headlamp were removed between the terms “pixel headlamp” and “based on features”. “the illuminance” in the preamble of claim 1 lacks antecedent basis. “the road surroundings” in step a) lacks antecedent basis. The recitation of the word “of” in “a night drive of the virtual motor vehicle” in step c) would be more clearly understood if rewritten to state “a night drive by the virtual motor vehicle”.
“in a recordable portion” in step d) is unclear as to what the recordable portion is a portion of. “recording virtual surroundings data by the virtual surroundings sensor in a recordable portion, which is recordable by the virtual surroundings sensor of the illuminable area illuminable by the virtual pixel headlamp” in step d) appears to contain redundant wording that lacks conciseness and clarity.

“different regions of the illuminable area of the scene”: It is unclear if the illuminable area of the scene corresponds to being illuminable by the virtual pixel headlamp or the actual pixel headlamp. Previous recitations of the illuminable area appear to refer to the illuminable area illuminable by the virtual pixel headlamp, and this limitation has accordingly been treated as such. Examiner recommends maintaining consistency in claim elements for clarity.

“in the recordable portion” in step g) lacks clarity as to what the recordable portion is a portion of.

In claim 1, step f) recites “changing individual light intensities of discrete affected pixels”, and step h) recites “an obtained light intensity”. It is unclear how such a value is obtained and whether the obtaining in step h) is equivalent to the changing in step f). If the elements are the same, there should be no separation in language indicating that an intermediate step occurs between “changing” and “obtained”; that is, the recitations of “an/the obtained” should read “the changed individual light intensities”. If the elements are different, there appears to be a lack of clarity as to how the surroundings data correlates to an obtained light intensity such that an analysis can be performed.
The limitation “performing one of the following steps based on whether or not the obtained light intensity satisfies the illumination rule” is indefinite because it is unclear which subsequent step is to be performed if the obtained light intensity satisfies the illumination rule and which step is to be performed if it does not. The claim should be rewritten to make clear which step correlates to satisfaction or dissatisfaction of the illumination rule.

The meaning of “the group of pixels” in step i) is unclear because a group of pixels is established in step f), which precedes step i), but step j) also incorporates a new group of pixels which may be evaluated in the iterations back into step i). Therefore, it is unclear which group of pixels is being referred to so as to form the value pairs.

“the previously determined group” in step j) lacks antecedent basis. It appears as though the applicant is referring to the group of pixels established in step f) of the claim. However, the claim also recites iterative steps which would modify what “the previously determined group” refers to.

“wherein a change amount for at least one pixel differs from the change amount for the at least one pixel in the previously determined group” lacks clarity as to the relationship between elements. It appears as though the applicant is distinguishing between two change amounts corresponding to the same pixel of the group in both instances. Examiner recommends revising the claim language to clearly indicate that “at least one pixel” corresponds to “the at least one pixel”, for example by stating “wherein the further changed amount for at least one pixel differs from the change amount of the same/corresponding at least one pixel in the previously determined group” or something similar that clearly shows the relationship.
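For orientation, the iterative structure of claim 1, steps f) through j), amounts to a closed-loop tuning procedure. Below is a minimal sketch under one hypothetical reading (repeat g) and h) with j) until the rule is satisfied, then perform i)); all function names and data shapes are illustrative and not from the application, and the claim as filed leaves this ordering ambiguous, which is part of the basis for the rejection:

```python
# Illustrative reading of claim 1, steps f)-j): iteratively adjust pixel
# intensities until the recorded illumination satisfies the rule, then
# emit (pixel, change-amount) value pairs for the control to be created.
# All names here are hypothetical placeholders, not the applicant's terms.

def tune_headlamp(pixels, record, satisfies_rule, adjust, max_iters=100):
    group, changes = adjust(pixels, prev_changes=None)   # step f): initial group/intensities
    for _ in range(max_iters):
        data = record()                                  # step g): re-record surroundings data
        if satisfies_rule(data):                         # step h): analyze against the rule
            # step i): value pairs from the group and the change amounts
            return [(p, changes[p]) for p in group]
        # step j): new group and/or further-changed intensities, then repeat
        group, changes = adjust(pixels, prev_changes=changes)
    raise RuntimeError("illumination rule not satisfied")
```

The examiner's alternative parses, e.g. ((g and h and j) or i), would correspond to structurally different loops, which is why the "and"/"or" combination is rejected as indefinite.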
“repeating steps g), h), and i) or j)” lacks clarity because it is unclear what the combination and ordering of steps should be, given the use of the terms “and” and “or”. Examiner recommends clarifying the claim language to distinguish which steps are repeated and which steps are alternatives to one another. For example, the claim as written could be read to entail ((g and h and j) or i) or (g and h and (j or i)).

The dependent claims 2-10 incorporate these deficiencies and are rejected under the same rationale.

Regarding claim 2: “the group of pixels” is unclear as to whether the element refers to the group of pixels in step f) or the new group of pixels in step j).

Regarding claim 3: “the spatial orientation” lacks antecedent basis. It is unclear what is meant by “the spatial orientation in the virtual scene is done on the basis of a…”. Under broadest reasonable interpretation and when read in light of the specification, there is no clear action being performed per the phrase “done on the basis of”, and plain language interpretation does not further clarify the action being claimed. Further, it is unclear how a spatial orientation can be particularly imparted, because “spatial orientation” under its plain meaning is an innate ability to perceive and maintain body position and posture relative to the surrounding environment. Where applicant acts as his or her own lexicographer to specifically define a term of a claim contrary to its ordinary meaning, the written description must clearly redefine the claim term and set forth the uncommon definition so as to put one reasonably skilled in the art on notice that the applicant intended to so redefine that claim term. Process Control Corp. v. HydReclaim Corp., 190 F.3d 1350, 1357, 52 USPQ2d 1029, 1033 (Fed. Cir. 1999).
The terms “spatial orientation” and “done” in claim 3 are used where the intended meaning is not ascertainable, while the accepted meanings of the terms are, respectively, “the ability to perceive, maintain, and navigate one's body position in relation to the surrounding environment, both at rest and during motion” and “completed; finished; through; no longer happening or existing”. The terms are indefinite because the specification does not clearly redefine them.

Regarding claim 4: The conjunction (and/or) recitation of “at least one environment camera and/or at least one virtual brightness sensor as at least one surroundings sensor” is unclear. It is unclear if both the environment camera and the brightness sensor can be the surroundings sensor or if only the brightness sensor can be the surroundings sensor. For purposes of this examination, both are potential devices which may be considered a surroundings sensor.

Regarding claim 5: “a second group of pixels” lacks clarity because two groups of pixels have already been introduced in claim 1, from which this claim depends (“a group of pixels” and “a new group of pixels”), which would make this recitation a third group of pixels. Examiner recommends using consistent terminology across claims to avoid lack-of-clarity issues such as this, or otherwise distinctly clarifying that the elements are separate from each other. Claim 6 incorporates these deficiencies and is thus rejected under the same rationale.

Regarding claim 6: “a second group of pixels” lacks clarity because two groups of pixels have already been introduced in claim 1, from which this claim depends (“a group of pixels” and “a new group of pixels”). Further, “the first group of pixels” lacks antecedent basis and clarity for similar reasons. It is unclear whether these claim elements refer to a previously defined group of pixels or whether all four groups are distinct from one another.
Examiner believes that “first group” corresponds to “a group” per claim 1 and “second group” corresponds to “new group” per claim 1. If this interpretation is correct, the claim language should be consistent across claims, such that repeatedly referenced elements intended to refer to the same thing are named consistently.

Regarding claim 7: “the presentation of the virtual scenes”, “the number of virtual scenes”, “the number of repetitions”, “the obtained illumination”, and “the clock rate” each lack antecedent basis. It is also unclear which steps are performed under which circumstances based on “depending which condition applies earliest”. That is, it is unclear if the number of repetitions corresponds to the number of repetitions required until the obtained illumination satisfies the illumination rule, or to the number of repetitions that is temporally possible simply based on the evaluation of which condition applies earliest. For example, does the satisfaction of condition A earliest mean that condition A dictates the number of repetitions, or alternatively does the satisfaction of condition B earliest mean that condition A dictates the number of repetitions (and vice versa, where B occurring earliest dictates B as the basis for the number of repetitions, or alternatively where B occurring earliest dictates A as the basis)? The applicant should clearly recite which step is performed and according to what condition exactly.

Regarding claim 9: “the group of pixels” lacks clarity as to what element is being referenced, since a plurality of groups of pixels was introduced in claim 1, from which this claim depends.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The following section follows the 2019 Patent Eligibility Guidance (PEG) for analyzing subject matter eligibility:

Step 1 - Statutory Category: Step 1 of the PEG analysis entails considering whether the claimed subject matter falls within the four statutory categories of patentable subject matter identified by 35 U.S.C. 101 (process, machine, manufacture, or composition of matter).

Step 2A Prong 1 - Judicial Exception: In Step 2A Prong 1, examiners evaluate whether the claim recites a judicial exception (an abstract idea, law of nature, or natural phenomenon).

Step 2A Prong 2 - Integration into a Practical Application: If a claim recites a judicial exception, the claim requires further analysis in Step 2A Prong 2, in which examiners evaluate whether the claim as a whole integrates the exception into a practical application.

Step 2B - Significantly More: If the additional elements identified in Step 2A Prong 2 do not integrate the exception into a practical application, then the claim is directed to the recited judicial exception and requires further analysis under Step 2B. As noted in MPEP 2106.05(II), the identification of the additional element(s) in the claim from Step 2A Prong 2, as well as the conclusions from Step 2A Prong 2 on the considerations discussed in MPEP 2106.05(a)-(c), (e), (f), and (h), are carried over.
Claim limitations identified as Insignificant Extra-Solution Activities are further evaluated to determine if the elements are beyond what is well-understood, routine, and conventional (WURC) activity, as dictated by MPEP 2106.05(II).

Independent Claims:

Claim 1:

Step 1: Claim 1 and its dependent claims 2-9 are directed to a method, which falls within one of the four statutory categories as a process.

Step 2A Prong 1: Claim 1 recites a judicial exception, noted in bold:

a) defining a virtual driving scenario, wherein the virtual driving scenario comprises a road and the road surroundings, wherein the road surroundings include vegetation, curbs, road signs, road markings, road users, and/or weather-related features;

The claim limitation can be reasonably read to entail making a judgment on details for a virtual driving scenario to include a variety of features. This task can be performed within the human mind or using a pen and paper as an assistive physical aid, since there are no limitations that impose how the scenario is defined, and under broadest reasonable interpretation a human being can write down a set of features characterizing such a scenario. Therefore, this claim limitation includes the recitation of the judicial exception of abstract ideas of a mental process.

b) defining a virtual motor vehicle, wherein the virtual motor vehicle has a virtual pixel headlamp corresponding to the actual pixel headlamp and a virtual surroundings sensor for recording at least a portion of an illuminable area illuminable by the virtual pixel headlamp;

The claim limitation can be reasonably read to entail making a judgment on details for a virtual motor vehicle which include a virtual pixel headlamp and a virtual surroundings sensor.
This task can be performed within the human mind or using a pen and paper as an assistive physical aid, since there are no limitations that impose how the virtual motor vehicle is defined, and under broadest reasonable interpretation a human can write down a set of features that characterize the motor vehicle.

e) analyzing the recorded virtual surroundings data to automatically identify at a spatial selection region in the virtual scene, wherein the spatial selection region indicates a region in which illuminance is to be changed due to a predefined illumination rule dependent on features of different regions of the illuminable area of the scene;

The claim limitation can be reasonably read to entail evaluating recorded data so as to identify a spatial selection region. This task can be performed within the human mind or using a pen and paper as an assistive physical aid. When read in light of the specification, ¶24, the disclosure states that a spatial selection region is automatically identified and does not involve any intervention from a human operator or developer. However, the process is still one which could be performed practically in the human mind, except for the recitation of generic computing components to perform the mental process. The courts do not distinguish between mental processes that can be performed entirely in the human mind and those which are performed using a computer. Accordingly, the limitation still recites the abstract idea of a mental process.

f) determining a group of pixels of the virtual pixel headlamp that are affected based on the identified spatial selection region, and changing individual light intensities of discrete affected pixels in the determined group of pixels of the virtual pixel headlamp according to the illumination rule;

The claim limitation can be reasonably read to entail making an evaluation and judgment as to a group of pixels that are affected based on an identified spatial region.
The claim further entails making a judgment to modify light intensities according to an evaluation of the illumination rule. This task can be performed within the human mind or using a pen and paper as an assistive physical aid, for example by writing down a corresponding intensity for each discrete pixel value that corresponds to the rules. Therefore, this claim limitation includes the recitation of the judicial exception of abstract ideas of a mental process.

h) analyzing the re-recorded virtual surroundings data as to whether an obtained light intensity satisfies the illumination rule in the spatial selection region; and

The claim limitation can be reasonably read to entail evaluating virtual surroundings and light intensity values with regard to the illumination rule. This task can be performed within the human mind or using a pen and paper as an assistive physical aid. Therefore, this claim limitation includes the recitation of the judicial exception of abstract ideas of a mental process.

performing one of the following steps based on whether or not the obtained light intensity satisfies the illumination rule:

The claim limitation can be reasonably read to entail evaluating a light intensity with regard to an illumination rule. This task can be performed within the human mind or using a pen and paper as an assistive physical aid. Therefore, this claim limitation includes the recitation of the judicial exception of abstract ideas of a mental process.

i) generating [[…]] value pairs for a control to be created based on the obtained light intensity satisfying the illumination rule, wherein the value pairs are formed from the group of pixels and respective change amounts for the discrete pixels in the group; or

The claim limitation can be reasonably read to entail making a judgment of value pairs for use as a control based on the evaluation of light intensity with regard to an illumination rule.
This task can be performed within the human mind or using a pen and paper as an assistive physical aid, for example by writing down value pairs accordingly. Therefore, this claim limitation includes the recitation of the judicial exception of abstract ideas of a mental process.

j) determining a new group of pixels of the virtual pixel headlamp that are affected based on the identified spatial selection region, wherein the new group differs from the previously determined group at least on account of one pixel, and/or further changing the individual light intensities of the discrete pixels of the virtual pixel headlamp according to the illumination rule, wherein a change amount for at least one pixel differs from the change amount for the at least one pixel in the previously determined group, and repeating steps g), h), and i) or j).

The claim limitation can be reasonably read to entail making a judgment as to the affected group of pixels and/or further making a judgment as to how the light intensities of the discrete pixels should be modified according to an evaluation of the illumination rule. This task can be performed within the human mind or using a pen and paper as an assistive physical aid. Therefore, this claim limitation includes the recitation of the judicial exception of abstract ideas of a mental process.

Therefore, the claim recites a judicial exception.

Step 2A Prong 2: Additional elements were identified and are noted in italics.
c) simulating a night drive of the virtual motor vehicle in the defined virtual driving scenario, with the virtual pixel headlamp switched on, by simulating successive virtual scenes, wherein each virtual scene represents a still image from the simulated night drive together with the virtual motor vehicle in the defined virtual driving scenario;

This limitation has been identified as Mere Instructions to Apply an Exception (MPEP 2106.05(f)) for invoking the use of generic computing components of a simulator as a tool to perform the judicial exception.

d) recording virtual surroundings data by the virtual surroundings sensor in a recordable portion, which is recordable by the virtual surroundings sensor of the illuminable area illuminable by the virtual pixel headlamp in at least one of the virtual scenes,

This limitation has been identified as Insignificant Extra-Solution Activity (MPEP 2106.05(g)) of mere data gathering and has further been identified as Field of Use and Technological Environment (MPEP 2106.05(h)) for generally linking the use of the judicial exception to a particular technological environment or field of use.

g) re-recording virtual surroundings data by the virtual surroundings sensor in the recordable portion;

This limitation has been identified as Insignificant Extra-Solution Activity (MPEP 2106.05(g)) of mere data gathering and has further been identified as Field of Use and Technological Environment (MPEP 2106.05(h)) for generally linking the use of the judicial exception to a particular technological environment or field of use.

and storing

This limitation has been identified as Insignificant Extra-Solution Activity (MPEP 2106.05(g)) of mere data gathering.
The courts have found that merely including instructions to implement an abstract idea on a computer or merely using a computer as a tool to perform an abstract idea (Mere Instructions to Apply an Exception (MPEP 2106.05(f))); adding insignificant extra-solution activity to the judicial exception (Insignificant Extra-Solution Activity (MPEP 2106.05(g))); and generally linking the use of a judicial exception to a particular technological environment or field of use (Field of Use and Technological Environment (MPEP 2106.05(h))) do not integrate the judicial exception into a practical application. When viewed independently and within the claim as a whole, the additional elements do not appear to integrate the judicial exception into a practical application.

Step 2B: As discussed in Step 2A Prong 2, additional elements were identified as Insignificant Extra-Solution Activity (MPEP 2106.05(g)), which must be further evaluated to determine if they are beyond WURC activities. Additional elements identified otherwise, and conclusions from Step 2A Prong 2, are carried over for evaluating whether the claim as a whole amounts to an inventive concept that is significantly more than the judicial exception:

d) recording virtual surroundings data by the virtual surroundings sensor in a recordable portion, which is recordable by the virtual surroundings sensor of the illuminable area illuminable by the virtual pixel headlamp in at least one of the virtual scenes,

This limitation has been identified as Insignificant Extra-Solution Activity (MPEP 2106.05(g)) of mere data gathering, as stated previously. Under broadest reasonable interpretation, the claim encompasses receiving data over a network and storing data in memory. These computer functions have been recognized by the courts as well-understood, routine, and conventional activities when claimed in a merely generic manner.
g) re-recording virtual surroundings data by the virtual surroundings sensor in the recordable portion;

This limitation has been identified as Insignificant Extra-Solution Activity (MPEP 2106.05(g)) of mere data gathering, as stated previously. Under broadest reasonable interpretation, the claim encompasses receiving data over a network and storing data in memory. These computer functions have been recognized by the courts as well-understood, routine, and conventional activities when claimed in a merely generic manner.

and storing

This limitation has been identified as Insignificant Extra-Solution Activity (MPEP 2106.05(g)) of mere data gathering, as stated previously. Under broadest reasonable interpretation, the claim encompasses storing data in memory. This computer function has been recognized by the courts as a well-understood, routine, and conventional activity when claimed in a merely generic manner.

The courts have found that simply appending insignificant extra-solution activities that are well-understood, routine, and conventional to the judicial exception does not qualify the limitations as “significantly more” than the recited judicial exception. The remaining additional elements were identified as Mere Instructions to Apply an Exception (MPEP 2106.05(f)) and Field of Use and Technological Environment (MPEP 2106.05(h)), as stated previously. The courts have found that merely using a computer as a tool to perform a mental process and generally linking the use of a judicial exception to a particular technological environment do not qualify the limitations as “significantly more” than the recited judicial exception.
With the additional elements viewed independently and as part of the ordered combination, the claim as a whole does not appear to amount to significantly more than the recited judicial exception, because the claim uses generic computing components recited at a high level of generality and functioning in their normal capacity, in conjunction with well-understood, routine, and conventional activity, to enable the performance of a task that can practically be performed within the human mind or using pen and paper as an assistive physical aid. Therefore, the claim does not include additional elements, alone or in combination, that are sufficient to amount to significantly more than the recited judicial exception.

Conclusion: Based on this rationale, the claim has been deemed ineligible subject matter under 35 U.S.C. 101.

Dependent Claims: Examiner notes that limitations identified as judicial exceptions are indicated in italicized bold and limitations identified as additional elements are indicated using italics.

Claim 2

Step 1: Regarding dependent claim 2, the judicial exception of independent claim 1 is further incorporated. The claim falls within the corresponding statutory category as stated previously.

Step 2A Prong 1: Claim 2 does not recite any additional judicial exceptions.

Step 2A Prong 2: Claim 2 additionally recites the limitation wherein the value pairs from the group of pixels and the respective change amounts are supplied as training data for a neural network. This limitation has been identified as Mere Instructions to Apply an Exception (MPEP 2106.05(f)), Insignificant Extra-Solution Activity (MPEP 2106.05(g)), and Field of Use and Technological Environment (MPEP 2106.05(h)).
The courts have ruled that limitations that amount to appending insignificant extra-solution activity, using generic computing components as a tool to perform an existing process, and generally linking the use of the judicial exception to a particular technological environment or field of use do not integrate the judicial exception into a practical application.

Step 2B: Because the limitation was identified as Insignificant Extra-Solution Activity (MPEP 2106.05(g)), it requires further evaluation to determine if it is beyond well-understood, routine, and conventional activity. Under broadest reasonable interpretation, supplying value pairs encompasses transmitting data over a network. The courts have found the computer functions of receiving and transmitting data over a network to be well-understood, routine, and conventional when claimed in a merely generic manner. The courts have found that limitations that amount to adding well-understood, routine, and conventional activity, invoking the use of generic computing components as a tool to perform the judicial exception, and generally linking the use of the judicial exception to a particular technological environment and field of use are not enough to qualify the claim as significantly more than the abstract idea. Therefore, the claim does not include additional elements, alone or in the ordered combination, that are sufficient to amount to significantly more than the recited judicial exception. This claim is not eligible subject matter under 35 U.S.C. 101.

Claim 3

Step 1: Regarding dependent claim 3, the judicial exception of independent claim 1 is further incorporated. The claim falls within the corresponding statutory category as stated previously.
Step 2A Prong 1: Claim 3 additionally recites the limitation and the global coordinates are transferred into a headlamp-specific coordinate system, which can reasonably be read to entail making an evaluation as to a change between global coordinates and headlamp-specific coordinates. This task can be performed within the human mind or using a pen and paper as an assistive physical aid. Therefore, this claim limitation includes the recitation of the judicial exception of abstract ideas of a mental process.

Step 2A Prong 2: Claim 3 additionally recites the limitation wherein the spatial orientation in the virtual scene is done on the basis of a global three-dimensional coordinate system. This limitation has been identified as Field of Use and Technological Environment (MPEP 2106.05(h)). The courts have ruled that generally linking the use of the judicial exception to a particular technological environment or field of use does not integrate the judicial exception into a practical application.

Step 2B: The courts have found that limitations that amount to generally linking the use of the judicial exception to a particular technological environment and field of use are not enough to qualify the claim as significantly more than the abstract idea. Therefore, the claim does not include additional elements, alone or in the ordered combination, that are sufficient to amount to significantly more than the recited judicial exception. This claim is not eligible subject matter under 35 U.S.C. 101.

Claim 4

Step 1: Regarding dependent claim 4, the judicial exception of independent claim 1 is further incorporated. The claim falls within the corresponding statutory category as stated previously.

Step 2A Prong 1: Claim 4 does not recite any additional judicial exceptions.
Step 2A Prong 2: Claim 4 additionally recites the limitation "wherein the virtual motor vehicle has at least one virtual environment camera and/or at least one virtual brightness sensor as at least one surroundings sensor and/or at least one virtual vehicle sensor for recording vehicle data, in particular acceleration and/or steering angle and/or yaw rate." This limitation has been identified as Field of Use and Technological Environment (MPEP 2106.05(h)). The courts have ruled that generally linking the use of the judicial exception to a particular technological environment or field of use does not integrate the judicial exception into a practical application. Step 2B: The courts have found that limitations that amount to generally linking the use of the judicial exception to a particular technological environment and field of use are not enough to qualify the claim as significantly more than the abstract idea. Therefore, the claim does not include additional elements, alone or in an ordered combination, that are sufficient to amount to significantly more than the recited judicial exception. This claim is not eligible subject matter under 35 U.S.C. 101. Claim 5 Step 1: Regarding dependent claim 5, the judicial exception of independent claim 1 is further incorporated. The claim falls within the corresponding statutory category as stated previously. Step 2A Prong 1: Claim 5 additionally recites the limitation "analyzing the recorded vehicle data to determine a second group of pixels of the virtual pixel headlamp on the basis of the recorded vehicle data; and," which can reasonably be read to entail observing and making a judgment on the recorded vehicle data. This task can be performed within the human mind or using a pen and paper as an assistive physical aid. 
The claim further recites the limitation "changing individual light intensities of discrete pixels in the second determined group of pixels of the virtual pixel headlamp according to the illumination rule," which can reasonably be read to entail evaluating the illumination rule and making a judgment as to how the light intensities should be modified. This task can be performed within the human mind or using a pen and paper as an assistive physical aid. Therefore, this claim limitation includes the recitation of the judicial exception of abstract ideas of a mental process. Step 2A Prong 2: Claim 5 additionally recites the limitation "recording virtual vehicle data by the at least one virtual sensor of the virtual motor vehicle." This limitation has been identified as Insignificant Extra-Solution Activity (MPEP 2106.05(g)). The courts have ruled that appending insignificant extra-solution activity to the judicial exception does not integrate the judicial exception into a practical application. Step 2B: Under broadest reasonable interpretation, recording data entails receiving data over a network and storing data in memory. These computer functions have been recognized by the courts as well-understood, routine, and conventional computer functions when claimed in a merely generic manner. The courts have found that limitations that amount to adding well-understood, routine, and conventional activities to the judicial exception are not enough to qualify the claim as significantly more than the abstract idea. Therefore, the claim does not include additional elements, alone or in an ordered combination, that are sufficient to amount to significantly more than the recited judicial exception. This claim is not eligible subject matter under 35 U.S.C. 101. Claim 6 Step 1: Regarding dependent claim 6, the judicial exception of independent claim 1 is further incorporated. The claim falls within the corresponding statutory category as stated previously. 
Step 2A Prong 1: Claim 6 does not recite any additional judicial exceptions. Step 2A Prong 2: Claim 6 additionally recites the limitation "wherein the second group of pixels is a subset of the first group of pixels." This limitation has been identified as Field of Use and Technological Environment (MPEP 2106.05(h)). The courts have ruled that generally linking the judicial exception to a particular technological environment or field of use does not integrate the judicial exception into a practical application. Step 2B: The courts have found that limitations that amount to generally linking the use of the judicial exception to a particular technological environment or field of use are not enough to qualify the claim as significantly more than the abstract idea. Therefore, the claim does not include additional elements, alone or in an ordered combination, that are sufficient to amount to significantly more than the recited judicial exception. This claim is not eligible subject matter under 35 U.S.C. 101. Claim 7 Step 1: Regarding dependent claim 7, the judicial exception of independent claim 1 is further incorporated. The claim falls within the corresponding statutory category as stated previously. Step 2A Prong 1: Claim 7 additionally recites the limitation "and the number of repetitions of step j) either corresponds to the number of repetitions required until the obtained illumination satisfies the illumination rule or corresponds to the number of repetitions that is temporally possible under the clock rate before the next lined-up virtual scene is analyzed, depending on which condition applies earliest," which can reasonably be read to entail making a judgment as to when to stop the repetitions, based on an evaluation of the illumination rule or an evaluation against a clock. This task can be performed within the human mind or using a pen and paper as an assistive physical aid. 
Therefore, this claim limitation includes the recitation of the judicial exception of abstract ideas of a mental process. Step 2A Prong 2: Claim 7 additionally recites the limitation "wherein the presentation of the virtual scenes one after the other is clocked such that the number of virtual scenes per second is predetermined." This limitation has been identified as Field of Use and Technological Environment (MPEP 2106.05(h)). The courts have ruled that generally linking the use of the judicial exception to a particular technological environment does not integrate the judicial exception into a practical application. Step 2B: The courts have found that limitations that amount to generally linking the use of the judicial exception to a particular technological environment are not enough to qualify the claim as significantly more than the abstract idea. Therefore, the claim does not include additional elements, alone or in an ordered combination, that are sufficient to amount to significantly more than the recited judicial exception. This claim is not eligible subject matter under 35 U.S.C. 101. Claim 8 Step 1: Regarding dependent claim 8, the judicial exception of independent claim 1 is further incorporated. The claim falls within the corresponding statutory category as stated previously. Step 2A Prong 1: Claim 8 additionally recites the limitation "wherein the individual light intensities are changed by respective change amounts based on multiplying each respective individual light intensity by a respective dimming factor," which can reasonably be read to entail evaluating individual light intensities with respect to a dimming factor. This task can be performed within the human mind or using a pen and paper as an assistive physical aid. Therefore, this claim limitation includes the recitation of the judicial exception of abstract ideas of a mental process. 
Furthermore, because this claim limitation explicitly recites "multiplying" in terms of numeric values, the claim is also considered to include the recitation of the judicial exception of abstract ideas of mathematical concepts (mathematical calculation). Step 2A Prong 2 & Step 2B: Claim 8 does not recite any additional elements that would integrate the judicial exception into a practical application or amount to significantly more than the judicial exception. This claim is not eligible subject matter under 35 U.S.C. 101. Claim 9 Step 1: Regarding dependent claim 9, the judicial exception of independent claim 1 is further incorporated. The claim falls within the corresponding statutory category as stated previously. Step 2A Prong 1: Claim 9 does not recite any additional judicial exceptions. Step 2A Prong 2: Claim 9 additionally recites the limitation "wherein the actual pixel headlamp is controlled by storing value pairs from the group of pixels and respective change amounts of the discrete pixels in the group, integrating the stored value pairs on a control device, and retrieving the stored value pairs." This limitation has been identified as Insignificant Extra-Solution Activity (MPEP 2106.05(g)) and Field of Use and Technological Environment (MPEP 2106.05(h)). The courts have ruled that appending insignificant extra-solution activity to the judicial exception and generally linking the judicial exception to a particular technological environment do not integrate the judicial exception into a practical application. Step 2B: Under broadest reasonable interpretation, the claims encompass storing information in memory, receiving and transmitting data over a network, and retrieving information from memory. These computer functions have been recognized by the courts as well-understood, routine, and conventional when claimed in a merely generic manner. 
The courts have found that limitations that amount to adding well-understood, routine, and conventional activities to the judicial exception are not enough to qualify the claim as significantly more than the abstract idea. Therefore, the claim does not include additional elements, alone or in an ordered combination, that are sufficient to amount to significantly more than the recited judicial exception. This claim is not eligible subject matter under 35 U.S.C. 101. Claim 10 Step 1: Regarding dependent claim 10, the judicial exception of independent claim 1 is further incorporated. The claim falls within the corresponding statutory category as stated previously. Step 2A Prong 1: Claim 10 additionally recites the limitation "wherein the illumination rule is determined by a desired two-dimensional distribution of the illuminance, which is dependent on a desired light function, in particular glare-free high beam and/or projection of lines and/or symbols onto the road." This limitation can reasonably be read to entail evaluating a desired light function so as to inform a judgment of a desired distribution of illuminance to characterize the illumination rule. This task can be performed within the human mind or using a pen and paper as an assistive physical aid. Therefore, this claim limitation includes the recitation of the judicial exception of abstract ideas of a mental process. Step 2A Prong 2 & Step 2B: Claim 10 does not recite any additional elements that would integrate the judicial exception into a practical application or amount to significantly more than the recited judicial exceptions. This claim is not eligible subject matter under 35 U.S.C. 101. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 1 and 3-10 are rejected under 35 U.S.C. 103 as being unpatentable over Johannes (DE102017211430A1), hereinafter referred to as Johannes, in view of Waldner et al. (Waldner, M., Kramer, M., and Bertram, T., "Hardware-in-the-Loop-Simulation of the light distribution of automotive Matrix-LED-Headlights", 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), 2019, pp. 1311-1316), hereinafter referred to as Waldner. 
Regarding claim 1, Johannes discloses (except the limitations surrounded by brackets ([[..]])) A method for configuring light functions of an actual pixel headlamp system comprising an actual pixel headlamp, wherein a two-dimensional distribution of the illuminance in an illuminable area of a scene illuminable by the actual pixel headlamp based on features of different regions of the illuminable area of the scene, comprising: A method is disclosed for controlling a pixel headlight of a motor vehicle system that includes at least one pixel headlight. The method uses images representative of light distribution. The distribution of light is dependent upon roadway features ((Johannes, ¶1) "The invention relates to a method for controlling a pixel headlight of a motor vehicle arranged on a roadway, in which the pixel headlight emits light depending on a control signal representing a sequence of images in order to illuminate the roadway at least partially, wherein each individual image of the sequence corresponds to a respective light distribution to be provided by the pixel headlight, the illuminated roadway is detected by means of a vehicle camera which provides corresponding camera data, and the camera data are evaluated in order to determine at least one indicative roadway feature and to provide the control signal for controlling the pixel headlight depending on the determined indicative roadway feature.") a) defining a [[virtual]] driving scenario, wherein the [[virtual]] driving scenario comprises a road and the road surroundings, wherein the road surroundings include vegetation, curbs, road signs, road markings, road users, and/or weather-related features; A vehicle environment is characterized by a plurality of features including the road, road edge and traffic signs ((Johannes, ¶26) "The vehicle environment preferably includes the roadway, which may also simply be a carriageway on which the motor vehicle is driven, a footpath located next to a carriageway, a 
road edge, traffic signs associated with the roadway, combinations thereof or the like. Even sections of the vehicle's route may be recorded."). The vehicle environment may also include features including stop signs, directional arrows, lane markings, and other signage ((Johannes, ¶34) "Based on the camera data relating to the reference image, the evaluation unit can, for example using an algorithm, search the camera data for known objects as indicative road features, such as stop signs, directional arrows, double lane markings (yellow/white), warning signs and/or speed indications."). The vehicle environment may comprise precipitation and ambient brightness, as weather conditions ((Johannes, ¶52) "For example, it may be possible to use the vehicle camera to detect ambient brightness or precipitation, and then activate the corresponding procedure."). The vehicle environment may further comprise other vehicles and road users ((Johannes, ¶32) "The evaluation unit allows the camera data to be analyzed and, for example, oncoming vehicles or other road users to be detected.") b) defining a [[virtual]] motor vehicle, wherein the [[virtual]] motor vehicle has a [[virtual pixel headlamp corresponding to the]] actual pixel headlamp and a [[virtual]] surroundings sensor for recording at least a portion of an illuminable area illuminable by the [[virtual]] pixel headlamp; A motor vehicle is characterized as a car ((Johannes, ¶3) "A motor vehicle of the type described is a vehicle which can be propelled by means of a drive device in its intended driving operation. The drive unit can be a drive unit that includes an internal combustion engine or an electric machine. Of course, combinations of these are also possible. The motor vehicle is preferably a car, in particular a passenger car. "). 
The motor vehicle is further characterized by having one or more pixel headlights and cameras that may capture information in the illuminated section of the road, which is illuminated by the pixel headlight ((Johannes, ¶4) "Modern motor vehicles have one or more pixel headlights, which can be used to provide highly flexible light distributions for illuminating the vehicle's surroundings, especially the road."); ((Johannes, ¶6) "Furthermore, modern motor vehicles include vehicle cameras to enable, for example, autonomous driving and/or to provide driver assistance systems with the necessary data"); ((Johannes, ¶24) "The vehicle camera can be, for example, a video camera, a photo camera and/or the like. Preferably, the vehicle camera has an electronic, in particular a digital, recording unit, so that camera data, preferably digital data, can be provided according to the captured vehicle environment. "); ((Johannes, ¶29) "At least the illuminated section of the road is recorded by the vehicle camera.") c) [[simulating]] a night drive of the [[virtual]] motor vehicle in the defined [[virtual]] driving scenario, with the [[virtual]] pixel headlamp switched on, by [[simulating]] successive [[virtual]] scenes, Driving is performed in darkness using the headlights, indicative of a night drive wherein the headlights are implied to be on ((Johannes, ¶2) "Motor vehicles have headlights, in particular motor vehicle headlights, by means of which the vehicle environment of the motor vehicle, in particular the roadway on which the motor vehicle is positioned, can be illuminated. The purpose of lighting is twofold: firstly, to ensure good visibility of the motor vehicle for other road users in unfavorable visibility conditions, especially in darkness, and secondly, to illuminate the roadway or carriageway, enabling the driver of the motor vehicle to drive safely on the roadway. 
Furthermore, the lighting to be provided by the motor vehicle is regulated by legal regulations and standards."). The driving vehicle continues to collect a sequence of reference images and camera data as the vehicle is driven ((Johannes, ¶48) "It is also advantageous if the image sequence repeatedly includes reference images, with successive reference images being spaced apart from each other by at least about 0.5 seconds, preferably at least about 0.8 seconds.") [[wherein each virtual scene represents a still image from the simulated night drive together with the virtual motor vehicle in the defined virtual driving scenario;]] d) recording [[virtual]] surroundings data by the [[virtual]] surroundings sensor in a recordable portion, which is recordable by the [[virtual]] surroundings sensor of the illuminable area illuminable by the [[virtual]] pixel headlamp in at least one of the [[virtual]] scenes, The illuminated section of the road is recorded by a camera during the drive ((Johannes, ¶24) "The vehicle camera can be, for example, a video camera, a photo camera and/or the like. Preferably, the vehicle camera has an electronic, in particular a digital, recording unit, so that camera data, preferably digital data, can be provided according to the captured vehicle environment."); ((Johannes, ¶29) "At least the illuminated section of the road is recorded by the vehicle camera. Depending on the design, it may also be intended that the vehicle camera only records a predefined area of the illuminated vehicle path. Preferably, the vehicle camera may only capture a section of the illuminated area that is positioned in front of the vehicle in the direction of travel during normal driving operation. However, it may also be intended that the vehicle camera records the entire vehicle path. The vehicle camera can be designed as a single unit and positioned appropriately on the vehicle. 
The vehicle camera can also be designed with multiple components, so that it can, for example, selectively record the vehicle's path in different directions and provide corresponding camera data. ") e) analyzing the recorded [[virtual]] surroundings data to automatically identify a spatial selection region in the [[virtual]] scene, wherein the spatial selection region indicates a region in which illuminance is to be changed due to a predefined illumination rule dependent on features of different regions of the illuminable area of the scene; The recorded camera data is evaluated by the evaluation unit with regard to a predefined light distribution, and a spatial area where another road user is located is identified, such that the sequence of images that dictates the control of the headlight accounts for such objects so as to modify the headlight output ((Johannes, ¶30-32) "Furthermore, an evaluation unit is provided that receives the camera data from the vehicle camera and evaluates it, for example, taking into account a predefined light distribution. This allows the control signal for controlling the pixel spotlight to be determined and provided. The specified light distribution is a light distribution that is to be provided, for example, as the target light distribution by means of the pixel spotlight. The specified light distribution can be provided by a higher-level vehicle control system, a control element that can be operated manually by the driver of the motor vehicle, and/or the like. For example, the specified light distribution can represent high beam, low beam and/or the like. The evaluation unit allows the camera data to be analyzed and, for example, oncoming vehicles or other road users to be detected. If other road users are detected, it may be possible to modify subsequent images in the image sequence in such a way that a spatial angle in which the other road user is located is hidden or unhidden. 
") f) determining a group of pixels of the [[virtual]] pixel headlamp that are affected based on the identified spatial selection region, and changing individual light intensities of discrete affected pixels in the determined group of pixels of the [[virtual]] pixel headlamp according to the illumination rule; Pixels corresponding to areas of interest identified by the evaluation of the camera data are modified to reduce light intensity to avoid glare for an oncoming object, and subsequently the pixels can be reactivated ((Johannes, ¶67-68) "A first image is formed by the reference image 16. While this reference image 16 serves to control the pixel headlight 10, 12, the vehicle camera captures the vehicle's surroundings and provides camera data 18 (Fig. 4). Based on the camera data 18, the following images 20 are determined, in which the area 24 is reduced with regard to light intensity in order to glare away or block out an oncoming object. The six images 20, which follow each other immediately, thus control the pixel spotlight 10, 12 with regard to light output for the next six cycles or frames. Further evaluation of the camera data 18, in particular taking into account area 26 (Fig. 4 ) areas 52 are also hidden or de-glared where reduced lighting is desired to reduce glare due to ice or water. This is achieved with the images 22 that follow images 20. A subsequent image 54 shows that the fade-out is reduced in areas 24 and 52. In these areas, the pixels are thus activated again to partially emit light.") g) re-recording [[virtual]] surroundings data by the [[virtual]] surroundings sensor in the recordable portion; Additional camera data is collected as the vehicle drives through the environment, wherein successive reference images are received for processing and the vehicle camera may be synchronized with the reference images to provide recordings. 
((Johannes, ¶48) "It is also advantageous if the image sequence repeatedly includes reference images, with successive reference images being spaced apart from each other by at least about 0.5 seconds, preferably at least about 0.8 seconds."); ((Johannes, ¶19) "With regard to a generic method, it is particularly proposed that the image sequence is provided with an image sequence frequency of greater than approximately 24 Hz, preferably greater than approximately 90 Hz, and particularly preferably greater than approximately 100 Hz, wherein the image sequence includes at least one reference image for uniformly illuminating the driving path, the vehicle camera is synchronized to capture the driving path with respect to the reference image, the at least one indicating driving path feature and the information data associated with the indicating driving path feature are determined from the provided camera data with respect to the reference image, the associated information data are compared with data from a database, and the control signal is determined depending on the comparison."); ((Johannes, ¶23) "To enable the vehicle's surroundings to be captured during the active reference image, the vehicle camera is synchronized accordingly."); ((Johannes, ¶24) "The vehicle camera can be, for example, a video camera, a photo camera and/or the like. Preferably, the vehicle camera has an electronic, in particular a digital, recording unit, so that camera data, preferably digital data, can be provided according to the captured vehicle environment. 
") h) analyzing the re-recorded [[virtual]] surroundings data as to whether an obtained light intensity satisfies the illumination rule in the spatial selection region; and The camera data is provided to the evaluation unit for evaluation to determine if the lighting in a respective area is insufficient ((Johannes, ¶33) "The vehicle camera, which is synchronized with respect to the reference image and captures the vehicle environment synchronously with respect to the reference image, delivers corresponding camera data to the evaluation unit. The evaluation unit can then use the camera data to determine details that would otherwise not be recognizable in the images of the image sequence, for example because the lighting in the respective area is insufficient, or because a corresponding area cannot be captured due to the vehicle camera being overloaded by excessive light exposure. In this way it is possible, for example, to identify road surfaces that are covered with ice or water. Furthermore, additional vehicle details can also be determined, for example from oncoming or preceding vehicles. Furthermore, it is of course possible to adjust the control signal accordingly by adapting images following the reference image, taking into account the insights gained from the evaluation. This makes it possible, for example, to illuminate a section of the road covered with ice or water less intensely, so that the driver of the motor vehicle or other road users are not blinded as much as possible.") performing one of the following steps based on whether or not the obtained light intensity satisfies the illumination rule: An evaluation is performed on camera data to determine sufficiency of the lighting for a given desired distribution ((Johannes, ¶61-62) "An evaluation of the camera data 18 is shown, according to which the camera data 18 detect an area 24 where the illumination is too high, so that glare may occur. 
Furthermore, the camera data 18 shows that an area 26 was identified where the lighting is insufficient. Contrasts in areas 24 and 26 cannot be adequately detected. In the remaining area 50, normal illumination within a permissible range was recorded. The camera data 18 are evaluated by means of an evaluation unit which is also not shown, taking into account a given light distribution, here a high beam, in order to determine and provide the control signal for controlling the pixel headlight 10 , 12. ") i) generating [[and storing]] value pairs for a control to be created based on the obtained light intensity satisfying the illumination rule, wherein the value pairs are formed from the group of pixels and respective change amounts for the discrete pixels in the group; or Control signals are generated for each pixel in the pixel headlight according to a sequence of images that contains pixel-value pairs corresponding to appropriate control ((Johannes, ¶25) " The pixel spotlight is a spotlight that has a plurality of matrix-like arranged, individually controllable pixels that can be controlled in a suitable manner to adjust the light output of the pixel spotlight according to the current image of the image sequence in accordance with the control signal. A pixel of the pixel spotlight therefore preferably represents an essentially point-shaped light source. The light source can be, for example, a light-emitting diode, but also, in principle, a gas discharge lamp, an incandescent lamp and/or the like. These light sources can be combined into a matrix, which may also include a headlight control system by means of which the individual light sources can be controlled in a corresponding manner according to the control signal. The pixel spotlight can also include a laser light source similar to a laser scanner, which is controlled accordingly to provide a light distribution in accordance with the control signal. "). 
The control signal is determined by accounting for the predefined light distribution as an illumination rule ((Johannes, ¶30) " Furthermore, an evaluation unit is provided that receives the camera data from the vehicle camera and evaluates it, for example, taking into account a predefined light distribution. This allows the control signal for controlling the pixel spotlight to be determined and provided. "). The image pixels correspond to the light output value as a change amount ((Johannes, ¶47) " The pixel spotlight is a spotlight that has a plurality of matrix-like arranged, individually controllable pixels that can be controlled in a suitable manner to adjust the light output of the pixel spotlight according to the current image of the image sequence in accordance with the control signal.") j) determining a new group of pixels of the [[virtual]] pixel headlamp that are affected based on the identified spatial selection region, wherein the new group differs from the previously determined group at least on account of one pixel, and/or further changing the individual light intensities of the discrete pixels of the [[virtual]] pixel headlamp according to the illumination rule, wherein a change amount for at least one pixel differs from the change amount for the at least one pixel in the previously determined group, and repeating steps g), h), and i) or j). Figure 5 depicts a group of pixels that correspond to an area having reduced light intensity at item 24 and shows an image later in the sequence having a second group of pixels that correspond to an area with a desired reduced lighting. 
[Image: media_image1.png (greyscale)] The areas of interest are modified at different times to reduce light output (24) or hide light output (52), and then later both are activated to partially emit light ((Johannes, ¶67-68) " Based on the camera data 18, the following images 20 are determined, in which the area 24 is reduced with regard to light intensity in order to glare away or block out an oncoming object. The six images 20, which follow each other immediately, thus control the pixel spotlight 10, 12 with regard to light output for the next six cycles or frames. Further evaluation of the camera data 18, in particular taking into account area 26 (Fig. 4) areas 52 are also hidden or de-glared where reduced lighting is desired to reduce glare due to ice or water. This is achieved with the images 22 that follow images 20. A subsequent image 54 shows that the fade-out is reduced in areas 24 and 52. In these areas, the pixels are thus activated again to partially emit light."); ((Johannes, ¶47) " In this configuration, the images following the reference image in the image sequence are adjusted accordingly based on the evaluation of the camera data during the reference image, in order to improve the illumination of the roadway, especially the road surface. This makes it possible to better illuminate specific areas of the road that are very bright and may dazzle the driver or the vehicle camera, or areas where visibility is poor due to insufficient lighting, by adjusting the images and consequently also adjusting the light output of the pixel headlight."). 
The method is performed continuously along the drive of the vehicle to obtain information about a dynamic environment, wherein new reference images are received intermittently and the adaptation of the headlight control is continuously performed per sets of image sequences, thereby indicating repetition of the steps of the method ((Johannes, ¶37) "The control unit thus adapts static guidance road features of the roadway to the dynamic information from the sensors and data, thereby eliminating the information difference between static guidance road features and the actual environment.");((Johannes, ¶48) " It is also advantageous if the image sequence repeatedly includes reference images, with successive reference images being spaced apart from each other by at least about 0.5 seconds, preferably at least about 0.8 seconds. It has been shown that with such a distance between the reference images, impairment of the driver and/or other road users can be largely avoided, while at the same time ensuring reliable functionality according to the invention. Alternatively or additionally, it can also be provided that the temporal distance between the reference images depends on the vehicle speed of the motor vehicle. It may be provided that the reference images follow each other in time with a small interval at high vehicle speeds, whereas at low vehicle speeds, for example when maneuvering or the like, the time interval between them may be increased. In addition, other vehicle parameters can of course be taken into account in order to adjust the time interval between successive reference images. The reference images do not need to follow each other at equidistant intervals in time; furthermore, it can be provided that the time interval between the reference images varies. 
") Johannes does not disclose the utilization of a simulation that would contain the virtualized components of the method, does not disclose a virtual pixel headlamp corresponding to an actual pixel headlamp, does not disclose virtual scenes representing a still image from the simulation with the vehicle in the scenario, and does not explicitly disclose the storage of value pairs for control. However, Waldner discloses a virtual driving scenario ((Waldner, Page 1311, Col 2, ¶1) " The solution is using the complete real headlight system in a virtual testing scenario. With the HiL-simulation the engineer can evaluate real headlights in predefined and repeatable scenarios at any time in the lab. The virtual test scenarios can be reproductions form real test drives or worst case analysis for specialized applications."); ((Waldner, Page 1312, Col 2, ¶2) " A traffic model simulates other road users in the virtual environment in order to create test scenarios for the headlight. The scenarios can also include different lighting, environment and weather conditions to simulate all important test scenarios.") a virtual motor vehicle ((Waldner, Page 1312, Col 1, ¶2) " It also measures the dynamics of the simulated ego vehicle, which are necessary for the light functions, for example the steering wheel for cornering light [8].") a virtual pixel headlamp corresponding to the actual pixel headlamp. A virtual light is represented in a corresponding way to a real headlight within the simulation ((Waldner, Page 1313, Col 1, ¶1) "The vehicle dynamics simulation sets the origin and the direction of the light source to move the virtual light like the real headlight."); ((Waldner, Page 1311, Col 2, ¶3- Page 1312, Col 1, ¶1) "The virtual headlight representation is shown in section III. Section IV presents the adjustment process of the image processing system step-by-step. 
The HiL-system is evaluated in section V by comparing the virtual light distribution with a real one from a matrix-headlight.") a virtual surroundings sensor ((Waldner, Page 1312, Col 1, ¶2) "In the virtual environment a virtual sensor system approximates the camera system by scanning the area in front of the ego vehicle.") simulating a night drive ((Waldner, Page 1311, Col 1, Abstract) " A virtual testing simulation visualizes the digitized light distribution under consideration of the actual driving dynamics."); ((Waldner, Page 1311, Col 2, ¶1) " By using the presented HiL-simulation the duration and number of necessary night test drives can be reduced. Also imperfections and faults can be found faster and earlier in the development process.") virtual scenes. See at least Figures 10 and 11 ((Waldner, Page 1316, Col 1, ¶2) " The next evaluation scenario is activating light functions in a test facility.") wherein each virtual scene represents a still image from the simulated night drive together with the virtual motor vehicle in the defined virtual driving scenario. The simulation includes the vehicle dynamics of an ego vehicle in a virtual environment ((Waldner, Page 1312, Col 1, ¶4 – Col 2, ¶1) " A vehicle dynamic simulation calculates the headlight-trajectory to make the evaluation of the influence of driving dynamics on light distribution possible."). The simulation is further characterized by the environment experienced by the vehicle while driving as the scenario ((Waldner, Page 1312, Col 2, ¶2) " A traffic model simulates other road users in the virtual environment in order to create test scenarios for the headlight. The scenarios can also include different lighting, environment and weather conditions to simulate all important test scenarios. The virtual sensors of the ego vehicle measure the current situation to close the test loop."); See at least Figures 10 and 11 showing still images taken from test scenarios. 
((Waldner, Page 1316, Col 1, ¶2) " The next evaluation scenario is activating light functions in a test facility."). The simulation is further described in terms of frames per second, wherein a frame is understood to be a still image of each discretized part of the simulation ((Waldner, Page 1315, Col 2, ¶3) "The simulation runs faster than 60 fps, the video stream from the cameras is at 30 fps and the delay in image processing is below 50 ms, so the approach is real-time capable.") and storing value pairs. Luminous intensity values for each light source in the pixel matrix are calculated and stored for control purposes ((Waldner, Page 1312, Col 1, ¶2) "The control algorithm adapts the light distribution on the collected data and calculates the individual luminous intensity I_v,i for each light source."); ((Waldner, Page 1313, Col 1, ¶2) "The luminous intensity values I_v,i are stored in an intensity matrix I_v in R^(m×n).") Johannes is analogous to the claimed invention because it is related to the same field of endeavor of pixel-based headlight control. Waldner is similarly analogous to the claimed invention in that it is related to the same field of endeavor of pixel-based headlight control, particularly by including simulations. It would have been obvious to one of ordinary skill in the art to which said subject matter pertains at the time the invention was filed to have incorporated the simulation aspect of Waldner into the methodology of Johannes because some teaching, suggestion, or motivation in the prior art would have led one having ordinary skill in the art to do so in order to arrive at the claimed invention. Johannes discloses a methodology to be performed in physical use scenarios of night driving while Waldner discloses leveraging simulated scenarios of night driving to optimize control of pixel-based headlights. 
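For orientation only, the "storing value pairs" mapping above, per-pixel luminous-intensity values stored and later retrieved as an m × n intensity matrix, can be pictured with the following minimal sketch. The data layout, names, and values are hypothetical; Waldner's quoted passage discloses the stored intensity matrix but not this code.

```python
# Hypothetical sketch of value-pair storage for an m x n pixel matrix:
# each pair associates a pixel index with a luminous intensity I_v,i,
# and the pairs are retrieved to rebuild the intensity matrix I_v.

m, n = 2, 3  # matrix-headlight rows and columns (illustrative)

# Value pairs: (pixel index) -> luminous intensity in candela.
value_pairs = {(i, j): 100.0 for i in range(m) for j in range(n)}

# Dim one pixel, e.g. to de-glare an oncoming road user.
value_pairs[(0, 1)] = 25.0

# Retrieval for control: rebuild the intensity matrix I_v in R^(m x n).
I_v = [[value_pairs[(i, j)] for j in range(n)] for i in range(m)]
```

The dictionary here plays the role of the stored value pairs; the final comprehension is the retrieval step used for control.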
Waldner particularly notes that using a simulation-based approach reduces the duration and number of necessary night test drives needed for attaining such controls and further notes that imperfections and faults can be found earlier in the development process ((Waldner, Page 1311, Col 2, ¶1) "The solution is using the complete real headlight system in a virtual testing scenario. With the HiL-simulation the engineer can evaluate real headlights in predefined and repeatable scenarios at any time in the lab. The virtual test scenarios can be reproductions form real test drives or worst case analysis for specialized applications. In the HiLtest the real headlight can be exposured to heat, cold or water to evaluate the effects of environmental conditions. By using the presented HiL-simulation the duration and number of necessary night test drives can be reduced. Also imperfections and faults can be found faster and earlier in the development process."). Accordingly, the combination of references would have been obvious to one having skill in the art so as to achieve the purported benefits. Regarding claim 3, the proposed combination discloses The method according to claim 1, as stated previously. The proposed combination in further view of Waldner discloses wherein the spatial orientation in the virtual scene is done on the basis of a global three-dimensional coordinate system, and the global coordinates are transferred into a headlamp-specific coordinate system. The world coordinate system of the simulated environment comprising a point defined by the directions x, y and z is converted to a texture coordinate system, which is the coordinate system of the light. ((Waldner, Page 1313, Col 1, ¶2) "The first step of the light simulation is converting a point p_W = (x_W, y_W, z_W)^T from world coordinate system W into the point p_T = (x_T, y_T, w_T)^T in the texture coordinate system T. 
T is also the light coordinate system because the texture spreads in the light direction like a spherical light distribution, which shows fig. 4. The coordinate transform ^W T_T ∈ R^(4×4) from W to T uses homogeneous coordinates:") It would have been further obvious to one of ordinary skill in the art to which said subject matter pertains at the time the invention was filed to have further modified the proposed combination because some teaching, suggestion, or motivation in the prior art references would have led one to make the modification. Waldner discloses that projective texture mapping is leveraged because it is real-time capable and luminous intensity distributions can be used, and further describes the texture mapping in terms of converting real-world coordinates into the texture coordinate system ((Waldner, Page 1313, Col 1, ¶1) "The used method for the light simulation is projective Texture-Mapping [5], [10] because it is real-time capable and LIDs can be used. The vehicle dynamics simulation sets the origin and the direction of the light source to move the virtual light like the real headlight."). Accordingly, the combination would have been obvious to achieve the real-time benefits. Regarding claim 4, the proposed combination discloses The method according to claim 1, as stated previously. The proposed combination in further view of Johannes discloses (except the limitations surrounded by brackets ([[..]])) wherein the [[virtual]] motor vehicle has at least one [[virtual]] environment camera and/or at least one [[virtual]] brightness sensor as at least one surroundings sensor and/or at least one [[virtual]] vehicle sensor for recording vehicle data, in particular acceleration and/or steering angle and/or yaw rate. The vehicle has a camera ((Johannes, ¶24) "The vehicle camera can be, for example, a video camera, a photo camera and/or the like. 
Preferably, the vehicle camera has an electronic, in particular a digital, recording unit, so that camera data, preferably digital data, can be provided according to the captured vehicle environment."). The vehicle further has a mechanism by which to acquire data from the vehicle including steering angle and speed ((Johannes, ¶12) "The headlight or its headlight control is controlled by means of a control signal, taking into account data from the vehicle, such as steering angle, speed and fixed programmed values such as vehicle width and/or the like.") The dynamics information is obtained from sensors ((Johannes, ¶37) "The control unit thus adapts static guidance road features of the roadway to the dynamic information from the sensors and data, thereby eliminating the information difference between static guidance road features and the actual environment."). Speed of the vehicle is known and can be evaluated as part of an algorithm used for tracking the movement, wherein the tracking is considered to be a record of such parameters ((Johannes, ¶79) "The program can implement a suitable algorithm. Since the speed of the motor vehicle is known and the position of the symbols or markings can be evaluated at high frequency using the vehicle camera from the images of image sequence 14, it is possible to track the movement and outlines of the symbols or markings within the driver's field of vision. ") The proposed combination in further view of Johannes does not disclose the virtual components of the claim. However, the proposed combination in further view of Waldner discloses simulated components as stated previously: the virtual motor vehicle ((Waldner, Page 1312, Col 1, ¶2) "It also measures the dynamics of the simulated ego vehicle, which are necessary for the light functions, for example the steering wheel for cornering light [8].") at least one virtual environment camera ((Waldner, Page 1312, Col. 
1, ¶2) "In the virtual environment a virtual sensor system approximates the camera system by scanning the area in front of the ego vehicle.") at least one virtual vehicle sensor ((Waldner, Page 1312, Col 1, ¶2) "The sensor system generates an object list from the road users. It also measures the dynamics of the simulated ego vehicle, which are necessary for the light functions, for example the steering wheel for cornering light [8].") Regarding claim 5, the proposed combination discloses The method according to claim 1, further comprising: as stated previously. The proposed combination in further view of Johannes discloses (except the limitations surrounded by brackets ([[..]])) recording [[virtual]] vehicle data by the at least one [[virtual]] sensor of the [[virtual]] motor vehicle; Data from the vehicle is obtained ((Johannes, ¶12) "The headlight or its headlight control is controlled by means of a control signal, taking into account data from the vehicle, such as steering angle, speed and fixed programmed values such as vehicle width and/or the like."). The dynamics information is obtained from sensors ((Johannes, ¶37) "The control unit thus adapts static guidance road features of the roadway to the dynamic information from the sensors and data, thereby eliminating the information difference between static guidance road features and the actual environment."). Speed of the vehicle is known and can be evaluated as part of an algorithm used for tracking the movement, wherein the tracking is considered to be a record of such parameters ((Johannes, ¶79) "The program can implement a suitable algorithm. Since the speed of the motor vehicle is known and the position of the symbols or markings can be evaluated at high frequency using the vehicle camera from the images of image sequence 14, it is possible to track the movement and outlines of the symbols or markings within the driver's field of vision. "). 
Information such as sensor data and vehicle position may be stored in a database ((Johannes, ¶35) "This database contains data from the internet, GPS data, vehicle camera data, traffic sign analysis, traffic radio, other sensor data, information on the vehicle's position and/or the like. Depending on the comparison, the control signal is then determined using the control signal unit, which is coupled to the evaluation unit via communication technology"). analyzing the recorded vehicle data to determine a second group of pixels of the [[virtual]] pixel headlamp on the basis of the recorded vehicle data; and Dynamic information from sensor data is considered by the control unit for determining control ((Johannes, ¶37) " The control unit thus adapts static guidance road features of the roadway to the dynamic information from the sensors and data, thereby eliminating the information difference between static guidance road features and the actual environment."). Dynamics data such as speeds affect the supply of reference images, thereby indicating that any analysis done on camera images with regard to the reference images is associated (as the basis) with the vehicle data ((Johannes, ¶48) "Alternatively or additionally, it can also be provided that the temporal distance between the reference images depends on the vehicle speed of the motor vehicle. It may be provided that the reference images follow each other in time with a small interval at high vehicle speeds, whereas at low vehicle speeds, for example when maneuvering or the like, the time interval between them may be increased. In addition, other vehicle parameters can of course be taken into account in order to adjust the time interval between successive reference images."). 
Areas of interest can be identified for a multitude of scenarios, wherein each image sequence accounts for the data from the vehicle, as stated previously, thereby indicating infinite identifiable groups of pixels depending on the current environment dynamics ((Johannes, ¶79) " The evaluation unit now reacts dynamically to the road symbols. This can include a program-controlled computing unit. The program can implement a suitable algorithm. Since the speed of the motor vehicle is known and the position of the symbols or markings can be evaluated at high frequency using the vehicle camera from the images of image sequence 14, it is possible to track the movement and outlines of the symbols or markings within the driver's field of vision. Unwanted and incorrect information can now be hidden, changed, or reduced in the remaining or subsequent images. Useful information can be clarified. "); Furthermore, each scenario contains a sequence of images that correlate to control where at least two subsequent images in the sequence occur, thereby indicating that a second group may be identified within the same sequence. See at least Figs 5-9 depicting a variety of identified groups of pixels for modification. changing individual light intensities of discrete pixels in the second determined group of pixels of the [[virtual]] pixel headlamp according to the illumination rule. Individual pixels of the headlight are controlled according to the image sequence, which contains a second pixel set as depicted in at least Figs 5-9 ((Johannes, ¶25) "The pixel spotlight is a spotlight that has a plurality of matrix-like arranged, individually controllable pixels that can be controlled in a suitable manner to adjust the light output of the pixel spotlight according to the current image of the image sequence in accordance with the control signal. A pixel of the pixel spotlight therefore preferably represents an essentially point-shaped light source. 
The light source can be, for example, a light-emitting diode, but also, in principle, a gas discharge lamp, an incandescent lamp and/or the like. These light sources can be combined into a matrix, which may also include a headlight control system by means of which the individual light sources can be controlled in a corresponding manner according to the control signal."); ((Johannes, ¶47) "In this configuration, the images following the reference image in the image sequence are adjusted accordingly based on the evaluation of the camera data during the reference image, in order to improve the illumination of the roadway, especially the road surface. This makes it possible to better illuminate specific areas of the road that are very bright and may dazzle the driver or the vehicle camera, or areas where visibility is poor due to insufficient lighting, by adjusting the images and consequently also adjusting the light output of the pixel headlight. Of course, the invention does not need to be limited to a single area; several over- or underexposed areas can be identified simultaneously."). The image sequence is provided based on a desired illumination ((Johannes, ¶25) "Furthermore, the pixel spotlight can of course include other optically active elements that are able to adapt the light emitted by the individual light sources of the pixel spotlight in the desired way to emit light according to the light distribution, for example refractory elements such as lenses, prisms and/or the like, reflective elements such as mirrors, in particular micromirrors, DMDs (Digital Mirror Devices), combinations thereof and/or the like."); ((Johannes, ¶31-32) " The specified light distribution is a light distribution that is to be provided, for example, as the target light distribution by means of the pixel spotlight. 
The specified light distribution can be provided by a higher-level vehicle control system, a control element that can be operated manually by the driver of the motor vehicle, and/or the like. For example, the specified light distribution can represent high beam, low beam and/or the like. The evaluation unit allows the camera data to be analyzed and, for example, oncoming vehicles or other road users to be detected. If other road users are detected, it may be possible to modify subsequent images in the image sequence in such a way that a spatial angle in which the other road user is located is hidden or unhidden ") The proposed combination in further view of Johannes does not disclose the virtual nature of the configuration, as stated above. However, Waldner is relied upon to teach these features: virtual vehicle data ((Waldner, Page 1312, Col 1, ¶2) "The sensor system generates an object list from the road users. It also measures the dynamics of the simulated ego vehicle, which are necessary for the light functions, for example the steering wheel for cornering light [8].") virtual sensor ((Waldner, Page 1312, Col 1, ¶2) "The sensor system generates an object list from the road users. It also measures the dynamics of the simulated ego vehicle, which are necessary for the light functions, for example the steering wheel for cornering light [8].") virtual motor vehicle ((Waldner, Page 1312, Col 1, ¶2) "It also measures the dynamics of the simulated ego vehicle, which are necessary for the light functions, for example the steering wheel for cornering light [8].") virtual pixel headlamp ((Waldner, Page 1313, Col 1, ¶1) "The vehicle dynamics simulation sets the origin and the direction of the light source to move the virtual light like the real headlight."); ((Waldner, Page 1311, Col 2, ¶3- Page 1312, Col 1, ¶1) "The virtual headlight representation is shown in section III. Section IV presents the adjustment process of the image processing system step-by-step. 
The HiL-system is evaluated in section V by comparing the virtual light distribution with a real one from a matrix-headlight.") Regarding claim 6, the proposed combination discloses The method according to claim 5, as stated previously. The proposed combination in further view of Johannes discloses wherein the second group of pixels is a subset of the first group of pixels. Figure 5 depicts a sequence of images that correspond to the control of the pixels of the headlights, where subsequent images contain subsets of the preceding image for control, as described in the annotated Fig. 5 below, annotations provided by the examiner for explanation. [media_image2.png, greyscale] See also Figures 6-9 as further examples. Regarding claim 7, the proposed combination discloses The method according to claim 1, as stated previously. The proposed combination in further view of Johannes discloses (except the limitations surrounded by brackets ([[..]])) [[wherein the presentation of the virtual scenes one after the other is clocked such that the number of virtual scenes per second is predetermined]] and the number of repetitions of step j) either corresponds to the number of repetitions required until the obtained illumination satisfies the illumination rule or corresponds to the number of repetitions that is temporally possible under the clock rate before the next lined-up [[virtual]] scene is analyzed, depending on which condition applies earliest. The procedure is stopped according to satisfaction of the conditions which started the procedure in the first place, thereby indicating that the iterative analysis and sequence of illuminance changes ceases when the illumination condition is met ((Johannes, ¶52) "In this way it is possible to use the procedure in an optimized way, especially when there are unfavorable lighting conditions and visibility is impaired, particularly for the driver of the motor vehicle or other road users. 
For example, it may be possible to use the vehicle camera to detect ambient brightness or precipitation, and then activate the corresponding procedure. Accordingly, the process can also be deactivated again if an improvement in visibility conditions has been determined using the vehicle camera or if the corresponding conditions that activated the process according to the invention no longer apply."); ((Johannes, ¶69) "Depending on requirements, additional reference images 16 may be provided in the image sequence 14. In the present embodiment, it is provided that every thirtieth image is a reference image 16. Of course, the frame rate or the time interval of the reference images 16 can also be varied as needed in order to adapt the procedure to current requirements in a suitable manner."). Reference images indicative of the environment are provided intermittently and an evaluation procedure is triggered per each active reference image to collect camera data, thereby indicating that the reference image corresponds to a new view of the environment and the procedure is restarted per each active reference image as a finite duration for evaluation of camera data ((Johannes, ¶23) "To enable the vehicle's surroundings to be captured during the active reference image, the vehicle camera is synchronized accordingly."); ((Johannes, ¶34) "Based on the camera data relating to the reference image, the evaluation unit can, for example using an algorithm, search the camera data for known objects as indicative road features, such as stop signs, directional arrows, double lane markings (yellow/white), warning signs and/or speed indications."); ((Johannes, ¶47) "It is further proposed that the images following the reference image in the image sequence be determined depending on the at least one over- or underexposed area. 
In this configuration, the images following the reference image in the image sequence are adjusted accordingly based on the evaluation of the camera data during the reference image, in order to improve the illumination of the roadway, especially the road surface. This makes it possible to better illuminate specific areas of the road that are very bright and may dazzle the driver or the vehicle camera, or areas where visibility is poor due to insufficient lighting, by adjusting the images and consequently also adjusting the light output of the pixel headlight. Of course, the invention does not need to be limited to a single area; several over- or underexposed areas can be identified simultaneously. "); ((Johannes, ¶48) "It is also advantageous if the image sequence repeatedly includes reference images, with successive reference images being spaced apart from each other by at least about 0.5 seconds, preferably at least about 0.8 seconds. It has been shown that with such a distance between the reference images, impairment of the driver and/or other road users can be largely avoided, while at the same time ensuring reliable functionality according to the invention. Alternatively or additionally, it can also be provided that the temporal distance between the reference images depends on the vehicle speed of the motor vehicle. It may be provided that the reference images follow each other in time with a small interval at high vehicle speeds, whereas at low vehicle speeds, for example when maneuvering or the like, the time interval between them may be increased. In addition, other vehicle parameters can of course be taken into account in order to adjust the time interval between successive reference images. The reference images do not need to follow each other at equidistant intervals in time; furthermore, it can be provided that the time interval between the reference images varies. "). 
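For orientation only, claim 7's two alternative stopping conditions, iterate until the illumination rule is satisfied, or until the repetitions that fit before the next clocked scene are used up, whichever applies earliest, can be sketched as follows. The rule, the budget, and all names and numbers are hypothetical and are not drawn from Johannes, Waldner, or the claims.

```python
# Hypothetical sketch of claim 7's dual stopping condition: repeat the
# adjustment step until the illumination rule is satisfied OR the
# repetitions that fit within the clocked scene interval run out,
# whichever condition applies earliest.

def adjust_until_done(intensity, target, step, max_repetitions):
    """max_repetitions models how many repetitions of step j) fit before
    the next clocked scene must be analyzed; the illumination rule here
    is simply 'come within half a step of the target'."""
    repetitions = 0
    while repetitions < max_repetitions:
        if abs(intensity - target) <= step / 2:  # illumination rule met
            break
        intensity += step if intensity < target else -step
        repetitions += 1
    return intensity, repetitions

budget = 5  # repetitions possible per scene at the predetermined clock rate

# Rule satisfied before the budget runs out:
value, reps = adjust_until_done(1.0, 0.7, 0.1, budget)
# Budget exhausted first; iteration stops at the clock limit:
value2, reps2 = adjust_until_done(1.0, 0.0, 0.1, budget)
```

The first call stops early because the rule is met; the second stops at the budget, mirroring the "temporally possible under the clock rate" alternative.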
The proposed combination in further view of Johannes does not particularly disclose a clock rate for the simulation because Johannes does not disclose a simulation. Accordingly, Waldner is relied upon to disclose wherein the presentation of the virtual scenes one after the other is clocked such that the number of virtual scenes per second is predetermined. The frame rate of the simulation is known to be greater than 60 fps as a predetermined value ((Waldner, Page 1315, Col 2, ¶3) "The simulation runs faster than 60 fps, the video stream from the cameras is at 30 fps and the delay in image processing is below 50 ms, so the approach is real-time capable. ") virtual scene. The simulation is further described in terms of frames per second, wherein a frame is understood to be a still image of each discretized part of the simulation ((Waldner, Page 1315, Col 2, ¶3) "The simulation runs faster than 60 fps, the video stream from the cameras is at 30 fps and the delay in image processing is below 50 ms, so the approach is real-time capable.") It would have been obvious to one of ordinary skill in the art to which said subject matter pertains at the time the invention was filed to have further modified the proposed combination to include a predetermined frame rate for the simulation because some teaching, suggestion, or motivation in the prior art would have led one having skill in the art to make the modification in order to arrive at the claimed invention. 
The image sequence frequency per Johannes has preferred values that reflect desirable speeds for accuracy and processing purposes ((Johannes, ¶19) " With regard to a generic method, it is particularly proposed that the image sequence is provided with an image sequence frequency of greater than approximately 24 Hz, preferably greater than approximately 90 Hz, and particularly preferably greater than approximately 100 Hz, wherein the image sequence includes at least one reference image for uniformly illuminating the driving path, the vehicle camera is synchronized to capture the driving path with respect to the reference image, the at least one indicating driving path feature and the information data associated with the indicating driving path feature are determined from the provided camera data with respect to the reference image, the associated information data are compared with data from a database, and the control signal is determined depending on the comparison."). Setting a frame rate for the simulation to a specified value enables tuning the clock rates of corresponding virtual sensors and processing, which in turn rely on the simulation’s parameters. Knowing the frame rate further enables precise tuning such that real-time approaches can be implemented, as disclosed by Waldner ((Waldner, Page 1315, Col 2, ¶3) "The simulation runs faster than 60 fps, the video stream from the cameras is at 30 fps and the delay in image processing is below 50 ms, so the approach is real-time capable.") Regarding claim 8, the proposed combination discloses The method according to claim 1, as stated previously. 
The proposed combination in further view of Johannes discloses (except the limitations surrounded by brackets ([[..]])) wherein the individual light intensities are changed by respective change amounts [[based on multiplying each respective individual light intensity by a respective dimming factor.]] The control of the pixel headlights is described as being adaptable to change values of certain areas of the headlight per a desired intensity ((Johannes, ¶33) "Furthermore, it is of course possible to adjust the control signal accordingly by adapting images following the reference image, taking into account the insights gained from the evaluation. This makes it possible, for example, to illuminate a section of the road covered with ice or water less intensely, so that the driver of the motor vehicle or other road users are not blinded as much as possible."); ((Johannes, ¶36) "If there is no match, adjusting the images in the image sequence in certain areas can make the indicative road features, such as the road markings, less visually apparent. It is also possible to actively "overlay" or cross over the directional road markings with other projected markings."); ((Johannes, ¶25) "The pixel spotlight is a spotlight that has a plurality of matrix-like arranged, individually controllable pixels that can be controlled in a suitable manner to adjust the light output of the pixel spotlight according to the current image of the image sequence in accordance with the control signal."); ((Johannes, ¶47) "In this configuration, the images following the reference image in the image sequence are adjusted accordingly based on the evaluation of the camera data during the reference image, in order to improve the illumination of the roadway, especially the road surface. 
This makes it possible to better illuminate specific areas of the road that are very bright and may dazzle the driver or the vehicle camera, or areas where visibility is poor due to insufficient lighting, by adjusting the images and consequently also adjusting the light output of the pixel headlight. Of course, the invention does not need to be limited to a single area; several over- or underexposed areas can be identified simultaneously.") The proposed combination in further view of Johannes does not particularly disclose establishing the change amounts by any particular mathematical calculation. However, the proposed combination in further view of Waldner discloses setting the illuminance for each point based on multiplying each respective individual light intensity by a respective dimming factor. A distance function is multiplied by intensity values for each point, where the distance function may be the inverse square law; one having skill in the art would understand that the inverse square law describes how light intensity decreases proportionally to the inverse square of the distance from the source, yielding a dimmed value ((Waldner, Page 1313, Col 1, ¶2) "The illuminance Ev(pW) at the point pW is approximately the multiplication of the intensity and the distance function fd(pW; pL,O) between pW and the origin pL,O of a point light [5], [9]. The inverse square law is a possible function for fd."). It would have been obvious to one of ordinary skill in the art to which said subject matter pertains at the time the invention was filed to have further modified the prior art references to incorporate the multiplication by a dimming factor as disclosed by Waldner into the change values for the light intensity of the pixels as disclosed by Johannes because some teaching, suggestion, or motivation would have led one having skill in the art to do so in order to arrive at the claimed invention.
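The multiplicative illuminance relation quoted from Waldner above can be sketched as follows. This is a minimal illustration of the inverse-square distance function and of a multiplicative dimming factor; the function names and the specific numeric values are hypothetical, not taken from either reference.

```python
def illuminance(intensity: float, distance: float) -> float:
    """Approximate illuminance E_v at a point as the product of the
    luminous intensity and a distance function f_d, here chosen as the
    inverse square law, per the quoted Waldner passage."""
    return intensity * (1.0 / distance ** 2)

def dimmed_intensity(intensity: float, dimming_factor: float) -> float:
    """Change an individual light intensity by multiplying it by a
    dimming factor, as recited for the claim 8 limitation."""
    return intensity * dimming_factor

# Doubling the distance quarters the illuminance (inverse square law):
e_near = illuminance(100.0, 2.0)   # 100 / 4  = 25.0
e_far = illuminance(100.0, 4.0)    # 100 / 16 = 6.25
```

The sketch shows why the examiner treats the distance function as acting like a dimming factor: both scale the intensity by multiplication.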
Johannes discloses modifying the light intensity of the pixels of the headlight but does not provide a particular mechanism by which to do so, and Waldner explicitly provides a mechanism by which to achieve illuminance values for each pixel using a mathematical function. Regarding claim 9, the proposed combination discloses The method according to claim 1, as stated previously. The proposed combination in further view of Johannes discloses wherein the actual pixel headlamp is controlled by [[storing]] value pairs from the group of pixels and respective change amounts of the discrete pixels in the group, integrating the stored value pairs on a control device, and retrieving the stored value pairs. Control signals are generated for each pixel in the pixel headlight according to a sequence of images that contains pixel-value pairs corresponding to appropriate control ((Johannes, ¶25) "The pixel spotlight is a spotlight that has a plurality of matrix-like arranged, individually controllable pixels that can be controlled in a suitable manner to adjust the light output of the pixel spotlight according to the current image of the image sequence in accordance with the control signal. A pixel of the pixel spotlight therefore preferably represents an essentially point-shaped light source. The light source can be, for example, a light-emitting diode, but also, in principle, a gas discharge lamp, an incandescent lamp and/or the like. These light sources can be combined into a matrix, which may also include a headlight control system by means of which the individual light sources can be controlled in a corresponding manner according to the control signal. The pixel spotlight can also include a laser light source similar to a laser scanner, which is controlled accordingly to provide a light distribution in accordance with the control signal.").
The control signal is obtained by the control signal unit ((Johannes, ¶28) "The control signal itself can be provided by a control signal unit and is preferably an electrical signal, which is particularly designed as a digital signal. The control signal represents a sequence of images in the manner of a video sequence, which serves to control the pixel spotlight accordingly. The pixel spotlight preferably uses a specific image from the image sequence, which is currently provided by means of the control signal, to control the light sources of the pixel spotlight. The pixel spotlight is therefore designed to emit light in the manner of a projector, in particular a video projector."); ((Johannes, ¶49) "However, it can also be provided that the reference control signal is supplied to the control signal unit, which then inserts a corresponding reference image into the image sequence or replaces an existing image with the reference image. Similarly, this function can of course also be provided in the pixel headlight itself."). The proposed combination in further view of Johannes discloses the control signal being provided from a control unit and discloses the determination of the control signal based on a comparison with data in the database, but does not particularly disclose the control signal itself being stored. However, the proposed combination in further view of Waldner discloses actually storing value pairs from the group of pixels and respective change amounts of the discrete pixels in the group ((Waldner, Page 1313, Col 1, ¶2) "The luminous intensity values Iv,i are stored in an intensity matrix Iv ∈ R^(m×n)."). It would have been obvious to one of ordinary skill in the art to which said subject matter pertains at the time the invention was filed to have further modified the proposed combination because combining prior art elements according to known methods would yield predictable results.
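The stored value pairs discussed above can be sketched as a small lookup structure in the spirit of Waldner's stored intensity matrix: pixel coordinates paired with change amounts, stored on a control device and retrieved on demand. This is a hypothetical illustration only; the class and method names are assumptions, not disclosed by either reference.

```python
from typing import Dict, Tuple

class ControlDevice:
    """Sketch of a control device that stores and retrieves
    (pixel, change-amount) value pairs for a pixel headlamp."""

    def __init__(self) -> None:
        # Keyed by (row, column) pixel coordinates in the matrix.
        self._pairs: Dict[Tuple[int, int], float] = {}

    def store(self, pixel: Tuple[int, int], change_amount: float) -> None:
        """Store a value pair for one discrete pixel in the group."""
        self._pairs[pixel] = change_amount

    def retrieve(self, pixel: Tuple[int, int]) -> float:
        """Retrieve a stored value pair; unchanged pixels default to 0.0."""
        return self._pairs.get(pixel, 0.0)

device = ControlDevice()
device.store((3, 7), -0.25)  # dim pixel (3, 7) by a stored change amount
print(device.retrieve((3, 7)))
```

Pre-storing such pairs, rather than recomputing them per frame, is the trade-off the obviousness rationale relies on.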
By pre-storing control signal patterns (as in Waldner) on a control device instead of determining patterns in real time (as in Johannes), one having skill in the art could reasonably expect reduced computation requirements by the control unit and benefits of re-usability for particular controls instead of dynamically calculating for every scenario. Regarding claim 10, the proposed combination discloses The method according to claim 1, as stated previously. The proposed combination in further view of Johannes discloses wherein the illumination rule is determined by a desired two-dimensional distribution of the illuminance, which is dependent on a desired light function, in particular glare-free high beam and/or projection of lines and/or symbols onto the road. Given light distributions correspond to high beam and de-glaring ((Johannes, ¶62) "The camera data 18 are evaluated by means of an evaluation unit which is also not shown, taking into account a given light distribution, here a high beam, in order to determine and provide the control signal for controlling the pixel headlight 10 , 12."); ((Johannes, ¶68) "Further evaluation of the camera data 18, in particular taking into account area 26 ( Fig. 4) areas 52 are also hidden or de-glared where reduced lighting is desired to reduce glare due to ice or water."); ((Johannes, ¶31) "The specified light distribution can be provided by a higher-level vehicle control system, a control element that can be operated manually by the driver of the motor vehicle, and/or the like. For example, the specified light distribution can represent high beam, low beam and/or the like."). Side stripe projection or symbols may be a desired illumination pattern ((Johannes, ¶84) "Illustrations 74 and 80 refer to crossing the determined arrow 66 by means of a projection provided by the pixel spotlight according to an image of the image sequence 14 as light emission. 
The image shown is the one determined by means of the control signal unit, taking into account the evaluation of the camera data 18. The illuminated roadway is shown at 80. It can be seen that arrow 66 is not illuminated. In contrast, a cross 68 is projected brightly. The rest of the area of image 74 is illuminated according to a normal, predetermined light distribution. Figure 80 shows the projection onto the roadway. The cross 68 is easily recognizable visually. Arrow 66, on the other hand, is barely visible."); ((Johannes, ¶87) "The evaluation shows that 92 side stripes are displayed, however parts of the side stripes are missing. By evaluating the camera data 86, an image 88 is determined, which serves to control the pixel spotlight 10 , 12. The already identified markings 92 are bright enough and do not need further illumination, whereas the rest of the roadway is illuminated with average light intensity. Areas with the missing side stripes 94 are irradiated to the maximum extent. The road surface has a continuous pattern of shoulders, as shown with 90."). The light distributions correspond to an image of the sequence of images, which are two-dimensional representations of how illumination is to be distributed ((Johannes, ¶61) "Each individual image 20, 22 ( Fig. 5) of an image sequence 14 corresponds to a respective light distribution to be provided by the pixel spotlight 10 , 12. ") Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Johannes in view of Waldner as applied to claim 1 above, and further in view of Stein (DE102018007662A1), hereinafter referred to as Stein. Regarding claim 2, the proposed combination discloses The method according to claim 1, as stated previously. The proposed combination in further view of Johannes discloses (except the limitations surrounded by brackets ([[..]])) wherein the value pairs from the group of pixels and the respective change amounts [[are supplied as training data for a neural network. 
]] Control signals are generated for each pixel in the pixel headlight according to a sequence of images that contains pixel-value pairs corresponding to appropriate control ((Johannes, ¶25) "The pixel spotlight is a spotlight that has a plurality of matrix-like arranged, individually controllable pixels that can be controlled in a suitable manner to adjust the light output of the pixel spotlight according to the current image of the image sequence in accordance with the control signal. A pixel of the pixel spotlight therefore preferably represents an essentially point-shaped light source. The light source can be, for example, a light-emitting diode, but also, in principle, a gas discharge lamp, an incandescent lamp and/or the like. These light sources can be combined into a matrix, which may also include a headlight control system by means of which the individual light sources can be controlled in a corresponding manner according to the control signal. The pixel spotlight can also include a laser light source similar to a laser scanner, which is controlled accordingly to provide a light distribution in accordance with the control signal."). The proposed combination in further view of Johannes does not disclose using the pixel-value pairs for training a neural network. However, Stein discloses such data being supplied as training data for a neural network in a similar image processing application. Training data is supplied to a neural network in an image processing system that is leveraged for projected light patterns from a vehicle ((Stein, ¶10) "According to a very advantageous further development of the idea, it can be provided that artifacts can be compensated for and/or calculated using learned data. For example, a neural network with trained icons can be used to filter out and/or mark the artifacts. In particular, the neural network can calculate out the projection based on the known semantics.
"); ((Stein, ¶19) "In the presentation of the Fig. 1 A vehicle 7 with an image acquisition unit 8, for example a camera, and a projection unit 9 can be seen. In an illuminated detection area 3, a projection 5 is shown, which in this embodiment represents a construction site sign. The light pattern 4 of the projection 5 is recognized as artifact 1 during image recognition and can therefore be compensated for and/or ignored during evaluation of other functions of the vehicle 7. Compensation can be achieved, for example, through learned data using a neural network. "); ((Stein, ¶15) "According to a beneficial further training, an image processing system can be trained using data generated by new projections. This allows image processing systems, which have been trained with pedestrian data, for example, to be supplemented by data generated through new projections. For example, if the system was only trained on complete views of pedestrians, these systems can also be trained on partial views of pedestrians in order to reliably recognize such views as pedestrians in the future. Semantic interpretations of the image information of new symbols are preferred. The user is trained to recognize and interpret the displayed information correctly."). Stein is analogous to the claimed invention because it is reasonably pertinent to the problem faced by the inventor. The claimed invention leverages image processing techniques for determining optimal control of pixel headlight systems used on a vehicle in a dynamic environment. Stein discloses the utilization of image processing techniques that take into consideration illuminated areas, illuminated by a car headlight, in a dynamic environment for evaluation ((Stein, ¶9) "Preferably, at least one projected light pattern can be recognized as an artifact during image recognition and compensated for and/or ignored. 
It is advantageous if the image recognition processing system knows where and/or how the light signals are brought into the detection area. In particular, information regarding the projected light patterns is provided to the image processing module via a compensation module in order to ignore and/or compensate for the corresponding areas in the image."). It would have been obvious to one of ordinary skill in the art to which said subject matter pertains at the time the invention was filed to have further modified the proposed combination to incorporate using the value-pair data of Johannes as training data for a neural network because some teaching, suggestion, or motivation would have led one having skill in the art to do so in order to arrive at the claimed invention. Stein suggests that using training data as supplementary data for image processing systems in conjunction with new data may enable image processing systems to be more reliable, correct, and robust ((Stein, ¶15) "This allows image processing systems, which have been trained with pedestrian data, for example, to be supplemented by data generated through new projections. For example, if the system was only trained on complete views of pedestrians, these systems can also be trained on partial views of pedestrians in order to reliably recognize such views as pedestrians in the future. Semantic interpretations of the image information of new symbols are preferred. The user is trained to recognize and interpret the displayed information correctly."). Accordingly, to achieve these benefits, the combination would have been obvious. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 20220161713 A1 discloses a headlight control system for a vehicle that includes training a machine learning model for the control system.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EMILY GORMAN LEATHERS whose telephone number is (571)272-1880. The examiner can normally be reached Monday-Friday, 9:00 am-5:00 pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, EMERSON PUENTE can be reached at (571) 272-3652. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /E.G.L./Examiner, Art Unit 2187 /EMERSON C PUENTE/Supervisory Patent Examiner, Art Unit 2187

Prosecution Timeline

Oct 24, 2022
Application Filed
Feb 20, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12536457
PARALLEL QUANTUM EXECUTION
2y 5m to grant Granted Jan 27, 2026


Prosecution Projections

1-2
Expected OA Rounds
75%
Grant Probability
99%
With Interview (+33.3%)
4y 0m
Median Time to Grant
Low
PTA Risk
Based on 4 resolved cases by this examiner. Grant probability derived from career allow rate.
