Prosecution Insights
Last updated: April 19, 2026
Application No. 18/540,099

METHOD FOR OPERATING A DATA PROCESSING SYSTEM

Non-Final OA §DP
Filed: Dec 14, 2023
Examiner: RIGOL, YAIMA
Art Unit: 2135
Tech Center: 2100 — Computer Architecture & Software
Assignee: Robert Bosch GmbH
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 3y 2m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 75% — above average (464 granted / 619 resolved; +20.0% vs TC avg)
Interview Lift: +17.5% (strong) across resolved cases with interview
Typical Timeline: 3y 2m avg prosecution; 18 applications currently pending
Career History: 637 total applications across all art units
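The headline figures above hang together arithmetically; a quick check (a sketch only — the variable names are ours, the values are taken from this page):

```python
# Sanity-check the examiner statistics shown above (values from this page).
granted, resolved = 464, 619
career_allow_rate = granted / resolved
print(round(100 * career_allow_rate))  # 75, matching the 75% career allow rate

# 637 total applications minus 18 currently pending leaves the resolved cases.
total_apps, currently_pending = 637, 18
print(total_apps - currently_pending)  # 619, matching "619 resolved"
```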

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§103: 54.0% (+14.0% vs TC avg)
§102: 9.2% (-30.8% vs TC avg)
§112: 17.5% (-22.5% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 619 resolved cases

Office Action

§DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

As per the instant application having Application No. 18/540,099, the preliminary amendment filed on 2/22/2024 is herein acknowledged. Claims 1-16 have been canceled and claims 17-31 have been added. Claims 17-31 are pending.

The specification has not been checked to the extent necessary to determine the presence of all possible minor errors. In the response to this Office action, the Examiner respectfully requests that support be shown for language added to any original claims on amendment and any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line numbers in the specification and/or drawing figure(s). This will assist the Examiner in prosecuting this application.

The Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

INFORMATION CONCERNING DRAWINGS

The applicant’s drawings submitted are acceptable for examination purposes.

STATUS OF CLAIM FOR PRIORITY IN THE APPLICATION

The instant application No. 18/540,099, filed 12/14/2023, claims foreign priority to 10 2022 214 053.2, filed 12/20/2022.

ACKNOWLEDGEMENT OF REFERENCES CITED BY APPLICANT

As required by M.P.E.P.
609(C), the applicant’s submission of the Information Disclosure Statement(s) dated 7/11/2024 is/are acknowledged by the examiner, and the cited references have been considered in the examination of the claims now pending. As required by M.P.E.P. 609(C)(2), a copy (copies) of the PTOL-1449(s) initialed and dated by the examiner is/are attached to the instant Office action.

OBJECTIONS

Claim Objections

Claims 17, 20, 23-24, 26-27 and 29-31 are objected to because of the following informalities:

The limitation “processing tasks exeecuting” in claim 17, line 4 of section “b)”, contains a typographical error and should be corrected to read “processing tasks executing”.

The limitation “data processin task” in claim 20, line 5, contains a typographical error and should be corrected to read “data processing task”.

The limitation “on a second data processing processing unit” in lines 2-3 of claim 23 should be corrected to remove the second instance of the word “processing” and read “on a second data processing unit”.

The limitation “indivual data processing task” in line 2 of claim 24 contains a typographical error and should be corrected to read “individual data processing task”.

The limitation “atleast one first data processing” in claim 26, lines 1-2, should be corrected to separate the words “at” and “least” and read “at least one first data processing”.

The limitation “invidual” in line 2 of claim 27 contains a typographical error and should be corrected to read “individual”.

As per claim 29, line 2, the limitation “processing system s operated” should be corrected to read “processing system is operated”.

As per claim 30, the limitation “systemfor” in line 5 should be amended to separate the words “system” and “for”. The limitation “indivual” in section “b)” should be corrected to read “individual”. There is a space between the word “in” and the ending period of claim 30.
It appears this space is a typographical error and should be deleted so that the period at the end of the claim appears immediately after the word “in” and reads “in.”.

The word “indiviual” in line 3 of claim 31 should be corrected to read “individual”. The word “exeecuting” in section “b)” of claim 31 should be corrected to read “executing”. The limitation “aa)” in claim 31 appears to be a typographical error and should be corrected to read “a)”.

Appropriate correction is required. Note that dependent claims 18-29 are also objected to for inheriting the deficiencies of the independent claim upon which they depend.

REJECTIONS BASED ON PRIOR ART

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).

Note that MPEP 804, subsection I.B.1, states: A complete response to a nonstatutory double patenting (NDP) rejection is either a reply by applicant showing that the claims subject to the rejection are patentably distinct from the reference claims or the filing of a terminal disclaimer in accordance with 37 CFR 1.321 in the pending application(s) with a reply to the Office action (see MPEP § 1490 for a discussion of terminal disclaimers). Such a response is required even when the nonstatutory double patenting rejection is provisional.

As filing a terminal disclaimer, or filing a showing that the claims subject to the rejection are patentably distinct from the reference application’s claims, is necessary for further consideration of the rejection of the claims, such a filing should not be held in abeyance. Only objections or requirements as to form not necessary for further consideration of the claims may be held in abeyance until allowable subject matter is indicated. Therefore, an application must not be allowed unless the required compliant terminal disclaimer(s) is/are filed and/or the withdrawal of the nonstatutory double patenting rejection(s) is made of record by the examiner. See MPEP § 804.02, subsection VI, for filing terminal disclaimers required to overcome nonstatutory double patenting rejections in applications filed on or after June 8, 1995.
Claims 17, 26-28 and 30-31 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-12 of US 12487632 in view of Huetter (US 2010/0122045). Although the conflicting claims are not identical, they are not patentably distinct from each other because the claims in the patent disclose/obviate the subject matter of the claims in the instant application. Claims of the instant application are compared to claims of the patent in the following table:

Instant Application vs. US 12487632 (Corresponding to Application No. 18/536,499)

17. (New) A method for operating a data processing system for processing data, wherein the data processing system is set up for repeated execution of a plurality of individual data processing tasks, wherein: a time grid with clock pulses is provided for execution of the individual data processing tasks, a predetermined respective repetition rate is specified for each of the individual data processing tasks, wherein the respective repetition rates each define a repetition clock pulse which corresponds in each case to an integer number of clock pulses of the grid, the repetition clock pulse of a data processing task of the individual data processing tasks with a highest respective repetition rate corresponds to the clock pulses of the time grid, the individual data processing tasks build on one another, so that at least one of the data processing tasks processes output data of a further one of the data processing tasks as input data, a number of buffer memories are provided, which are assigned to the clock pulses of the time grid and are available in turn, so that output data generated during a relevant clock pulse of the clock pulses of the grid are written to the assigned buffer memory, and output data generated during previous clock pulses are still available in other buffer memories for a number of clock pulses, the data processing system is operated on a data processing device
including at least one first data processing unit and including at least one second data processing unit, which in each case include processors and memory modules, wherein data transmission interfaces exist for data transmission between the first and second data processing units, wherein each of the individual data processing tasks is associated with at least one first data processing unit of the at least one first data processing unit or at least one second data processing unit of the at least one second data processing unit, and memory areas of the buffer memories are made available to the memory modules of the at least one first data processing unit and the at least one second data processing unit; wherein the following steps are carried out for the operation of the data processing system: a) executing each of the individual data processing tasks at its respective repetition rate in the time grid on one of the first and second data processing units of the data processing system; b) outputting output data by the individual data processing tasks into respectively available memory areas of the buffer memory which is assigned to the clock pulse of the grid, wherein output data generated by those of the individual data processing tasks exeecuting on second data processing units of the at least one second data processing unit are transmitted via the data transmission interfaces after output into memory areas of the buffer memory of first data processing units of the at least one first data processing unit; and c) reading in input data by the individual data processing tasks from respectively available memory areas of the buffer memory, which are associated with preceding clock pulses of the grid, wherein those of the input data, which are required by those of the individual data processing tasks executing on second data processing units of the at least one second data processing unit, are transmitted via the data transmission interfaces into memory areas of the buffer 
memory on second data processing units of the at least one second data processing unit before being read in. 26. (New) The method according to claim 17, wherein first data processing units of the atleast one first data processing unit monitor that second data processing units of the at least one second data processing terminate associated data processing units such that the output data are available for data processing of preceding clock pulses in each clock pulse. 27. (New) The method according to claim 17, wherein, for step a), a controller of the data processing system, which controller is higher level relative to the invidual data processing tasks, determines on which data processing unit of the at least one first and second data processing units an individual data processing task of the individual data processing tasks is executed. 28. (New) The method according to claim 17, wherein, for steps b) and c), a controller that is higher level relative to the individual data processing tasks determines on which memory area of a buffer memory of the buffer memories particular data processing tasks of the individual processing tasks will store their output data so as to be capable of being read in as input data by others of the individual data processing tasks. 30. 
(New) A data processing device, comprising: at least one first data processing unit optimized for error reduction and at least one second data processing unit optimized for performance, which each include one or more processors and one or more memory modules, wherein the data processing device is configured such that it can be operated as a data processing systemfor processing data, wherein the data processing system is set up for repeated execution of a plurality of individual data processing tasks, wherein: a time grid with clock pulses is provided for execution of the individual data processing tasks, a predetermined respective repetition rate is specified for each of the individual data processing tasks, wherein the respective repetition rates each define a repetition clock pulse which corresponds in each case to an integer number of clock pulses of the grid, the repetition clock pulse of a data processing task of the individual data processing tasks with a highest respective repetition rate corresponds to the clock pulses of the time grid, the individual data processing tasks build on one another, so that at least one of the data processing tasks processes output data of a further one of the data processing tasks as input data, a number of buffer memories are provided, which are assigned to the clock pulses of the time grid and are available in turn, so that output data generated during a relevant clock pulse of the clock pulses of the grid are written to the assigned buffer memory, and output data generated during previous clock pulses are still available in other buffer memories for a number of clock pulses, wherein data transmission interfaces exist for data transmission between the first and second data processing units, wherein each of the individual data processing tasks is associated with at least one first data processing unit of the at least one first data processing unit or at least one second data processing unit of the at least one second data 
processing unit, and memory areas of the buffer memories are made available to the memory modules of the at least one first data processing unit and the at least one second data processing unit; wherein, for the operation of the data processing system: a) each of the individual data processing tasks is executed at its respective repetition rate in the time grid on one of the first and second data processing units of the data processing system; b) output data is output by the individual data processing tasks into respectively available memory areas of the buffer memory which is assigned to the clock pulse of the grid, wherein output data generated by those of the indivual data processing tasks executing on second data processing units of the at least one second data processing unit are transmitted via the data transmission interfaces after output into memory areas of the buffer memory of first data processing units of the at least one first data processing unit; and c) input data is read in by the individual data processing tasks from respectively available memory areas of the buffer memory, which are associated with preceding clock pulses of the grid, wherein those of the input data, which are required by those of the individual data processing tasks executing on second data processing units of the at least one second data processing unit, are transmitted via the data transmission interfaces into memory areas of the buffer memory on second data processing units of the at least one second data processing unit before being read in . 31. 
(New) A non-transitory computer-readable storage medium on which are stored commands for operating a data processing system for processing data, wherein the data processing system is set up for repeated execution of a plurality of indiviual data processing tasks, wherein: a time grid with clock pulses is provided for execution of the individual data processing tasks, a predetermined respective repetition rate is specified for each of the individual data processing tasks, wherein the respective repetition rates each define a repetition clock pulse which corresponds in each case to an integer number of clock pulses of the grid, the repetition clock pulse of a data processing task of the individual data processing tasks with a highest respective repetition rate corresponds to the clock pulses of the time grid, the individual data processing tasks build on one another, so that at least one of the data processing tasks processes output data of a further one of the data processing tasks as input data, a number of buffer memories are provided, which are assigned to the clock pulses of the time grid and are available in turn, so that output data generated during a relevant clock pulse of the clock pulses of the grid are written to the assigned buffer memory, and output data generated during previous clock pulses are still available in other buffer memories for a number of clock pulses, the data processing system is operated on a data processing device including at least one first data processing unit and including at least one second data processing unit, which in each case include processors and memory modules, wherein data transmission interfaces exist for data transmission between the first and second data processing units, wherein each of the individual data processing tasks is associated with at least one first data processing unit of the at least one first data processing unit or at least one second data processing unit of the at least one second data processing 
unit, and memory areas of the buffer memories are made available to the memory modules of the at least one first data processing unit and the at least one second data processing unit; wherein the commands, when executed by a computer, cause the computer to perform the following steps for the operation of the data processing system: aa) executing each of the individual data processing tasks at its respective repetition rate in the time grid on one of the first and second data processing units of the data processing system; b) outputting output data by the individual data processing tasks into respectively available memory areas of the buffer memory which is assigned to the clock pulse of the grid, wherein output data generated by those of the individual data processing tasks exeecuting on second data processing units of the at least one second data processing unit are transmitted via the data transmission interfaces after output into memory areas of the buffer memory of first data processing units of the at least one first data processing unit; and c) reading in input data by the individual data processing tasks from respectively available memory areas of the buffer memory, which are associated with preceding clock pulses of the grid, wherein those of the input data, which are required by those of the individual data processing tasks executing on second data processing units of the at least one second data processing unit, are transmitted via the data transmission interfaces into memory areas of the buffer memory on second data processing units of the at least one second data processing unit before being read in. 1. 
A method for operating a data processing system for processing data, wherein the data processing system is set up for repeated execution of a plurality of data processing tasks, wherein a time grid with clock pulses is provided for execution of individual data processing tasks of the plurality of data processing tasks, a predetermined repetition rate is specified for each of the individual data processing tasks, wherein the predetermined repetition rates each define a repetition clock pulse which corresponds in each case to an integer number of clock pulses of the time grid, the repetition clock pulse of the data processing task of the individual data processing tasks with a highest repetition rate corresponds to the clock pulses of the time grid, wherein the individual data processing tasks build on one another, so that at least one of the individual data processing tasks processes output data of a further data processing task of the individual data processing tasks as input data, wherein a number of buffer memories are provided, which are assigned to the clock pulses of the time grid and are available in turn, so that output data generated during a relevant clock pulse are written to a relevant buffer memory and output data generated during previous clock pulses for a number of clock pulses are still available in other buffer memories of the buffer memories, wherein the data processing system is operated on a data processing device including at least two data processing units with processors and memory modules, wherein the individual data processing tasks are assigned to at least one of the data processing units, and memory areas of the buffer memories are made available on the memory modules of the assigned data processing units, see plurality of buffer memories for a plurality of processing units below; also see claim 8 wherein the following steps are carried out for the operation of the data processing system: a) executing a synchronization function in each 
clock pulse before the start of a relevant data processing task in order to achieve synchronization of the buffer memories for a plurality of the data processing units; b) executing individual data processing tasks of the plurality of data processing tasks at their specified repetition rate in the time grid on one of the data processing units of the data processing system; c) outputting output data by the individual data processing tasks into respectively provided memory areas of the buffer memory assigned to the clock pulse of the time grid; and d) reading in input data by the individual data processing tasks from respectively provided memory areas of the buffer memory which are assigned to preceding clock pulses of the time grid. 2. The method according to claim 1, wherein the output data of the individual data processing tasks are further processed as input data of further data processing tasks without a copying operation. 3. The method according to claim 1, wherein the synchronization function in step a) enables external memory accesses by data processing units of the at least two processing units to memory modules of other data processing units of the at least two processing units, in which it is ensured that all of the output data of previously executed data processing tasks are available. 4. The method according to claim 1, wherein at least one cache memory in at least one of the data processing units is emptied by the synchronization function in step a) and data contained in the at least one cache memory are stored on a memory module of the data processing unit in such a way that external memory accesses by other data processing units of the at least two processing units to the data stored on the memory module are enabled. 5. The method according to claim 1, wherein the synchronization function is in each case executed in advance of the data processing task with the highest repetition rate. 6. 
The method according to claim 1, wherein the synchronization function is executed for each of the data processing units. 7. The method according to claim 1, wherein the synchronization function has an execution priority corresponding to a execution priority of the data processing task with a highest priority. 8. The method according to claim 1, wherein the data processing system has a communication memory, wherein information is stored in the communication memory as to which output data are stored in which memory areas of the buffer memories. 9. The method according to claim 1, wherein for step b) a controller of the data processing system, which controller is higher-level relative to the individual data processing tasks, determines on which data processing unit of the at least two data processing units each of the individual data processing tasks is executed. 10. The method according to claim 1, wherein for step c) and d), a controller that is higher-level relative to the data processing tasks determines on which memory area of a buffer memory particular data processing tasks will store their output data so as to be capable of being read in as input data by other data processing tasks. 11. 
A data processing device, comprising: at least two data processing units which each have one or more processors and one or more memory modules, wherein the data processing device is configured such that it can be operated as a data processing system, the data processing system being set up for repeated execution of a plurality of data processing tasks, wherein a time grid with clock pulses is provided for execution of individual data processing tasks of the plurality of data processing tasks, a predetermined repetition rate is specified for each of the individual data processing tasks, wherein the predetermined repetition rates each define a repetition clock pulse which corresponds in each case to an integer number of clock pulses of the time grid, the repetition clock pulse of the data processing task of the individual data processing tasks with a highest repetition rate corresponds to the clock pulses of the time grid, wherein the individual data processing tasks build on one another, so that at least one of the individual data processing tasks processes output data of a further data processing task of the individual data processing tasks as input data, wherein a number of buffer memories are provided, which are assigned to the clock pulses of the time grid and are available in turn, so that output data generated during a relevant clock pulse are written to a relevant buffer memory and output data generated during previous clock pulses for a number of clock pulses are still available in other buffer memories of the buffer memories, wherein the individual data processing tasks are assigned to at least one of the data processing units, and memory areas of the buffer memories are made available on the memory modules of the assigned data processing units, wherein the data processing system is configured to: a) execute a synchronization function in each clock pulse before the start of a relevant data processing task in order to achieve synchronization of the buffer 
memories for a plurality of the data processing units; b) execute individual data processing tasks of the plurality of data processing tasks at their specified repetition rate in the time grid on one of the data processing units of the data processing system; c) output output data by the individual data processing tasks into respectively provided memory areas of the buffer memory assigned to the clock pulse of the time grid; and d) read in input data by the individual data processing tasks from respectively provided memory areas of the buffer memory which are assigned to preceding clock pulses of the time grid. 12. A non-transitory computer-readable storage medium on which are stored commands for operating a data processing system for processing data, wherein the data processing system is set up for repeated execution of a plurality of data processing tasks, wherein a time grid with clock pulses is provided for execution of individual data processing tasks of the plurality of data processing tasks, a predetermined repetition rate is specified for each of the individual data processing tasks, wherein the predetermined repetition rates each define a repetition clock pulse which corresponds in each case to an integer number of clock pulses of the time grid, the repetition clock pulse of the data processing task of the individual data processing tasks with a highest repetition rate corresponds to the clock pulses of the time grid, wherein the individual data processing tasks build on one another, so that at least one of the individual data processing tasks processes output data of a further data processing task of the individual data processing tasks as input data, wherein a number of buffer memories are provided, which are assigned to the clock pulses of the time grid and are available in turn, so that output data generated during a relevant clock pulse are written to a relevant buffer memory and output data generated during previous clock pulses for a number of clock 
pulses are still available in other buffer memories of the buffer memories, wherein the data processing system is operated on a data processing device including at least two data processing units with processors and memory modules, wherein the individual data processing tasks are assigned to at least one of the data processing units, and memory areas of the buffer memories are made available on the memory modules of the assigned data processing units, wherein the commands, when executed by a computer, causing the computer to perform the following steps: a) executing a synchronization function in each clock pulse before the start of a relevant data processing task in order to achieve synchronization of the buffer memories for a plurality of the data processing units; b) executing individual data processing tasks of the plurality of data processing tasks at their specified repetition rate in the time grid on one of the data processing units of the data processing system; c) outputting output data by the individual data processing tasks into respectively provided memory areas of the buffer memory assigned to the clock pulse of the time grid; and d) reading in input data by the individual data processing tasks from respectively provided memory areas of the buffer memory which are assigned to preceding clock pulses of the time grid. 
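To make the mechanism common to both claim sets easier to follow, here is a minimal, hypothetical Python sketch: tasks repeat at integer multiples of a base clock pulse of the time grid, and a small ring of buffer memories keeps each pulse's output readable for several subsequent pulses. All names, the buffer count, and the task set are illustrative assumptions, not language from the application or the cited patent.

```python
NUM_BUFFERS = 4  # assumed ring size; one buffer per clock pulse, reused in turn

class Task:
    def __init__(self, name, period, reads_from=None):
        self.name = name              # illustrative identifier
        self.period = period          # repetition clock pulse, in grid pulses
        self.reads_from = reads_from  # upstream task whose output is our input

def run_grid(tasks, num_pulses):
    """Execute one time grid; each pulse writes into its assigned buffer slot."""
    buffers = [dict() for _ in range(NUM_BUFFERS)]
    log = []
    for t in range(num_pulses):
        current = buffers[t % NUM_BUFFERS]
        current.clear()  # slot is recycled; older output lives in other slots
        for task in tasks:
            if t % task.period != 0:
                continue  # not this task's repetition pulse
            inp = None
            if task.reads_from is not None:
                # Input comes from a preceding pulse's buffer, still available
                # in the ring for NUM_BUFFERS - 1 pulses after being written.
                for back in range(1, NUM_BUFFERS):
                    if t - back < 0:
                        break
                    prev = buffers[(t - back) % NUM_BUFFERS]
                    if task.reads_from in prev:
                        inp = prev[task.reads_from]
                        break
            current[task.name] = (t, inp)  # output into this pulse's buffer
            log.append((t, task.name))
    return log

# A fast task at the base rate and a slower consumer at half the rate.
fast = Task("fast", period=1)
slow = Task("slow", period=2, reads_from="fast")
print(run_grid([fast, slow], num_pulses=4))
# [(0, 'fast'), (0, 'slow'), (1, 'fast'), (2, 'fast'), (2, 'slow'), (3, 'fast')]
```

The sketch deliberately omits the first/second processing units and the data transmission interfaces that distinguish the instant claims; it only illustrates the shared time-grid and rotating-buffer idea.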
Regarding claims 17, 30 and 31, the patent does not expressly refer to the limitations “wherein output data generated by those of the individual data processing tasks exeecuting on second data processing units of the at least one second data processing unit are transmitted via the data transmission interfaces after output into memory areas of the buffer memory of first data processing units of the at least one first data processing unit… wherein those of the input data, which are required by those of the individual data processing tasks executing on second data processing units of the at least one second data processing unit, are transmitted via the data transmission interfaces into memory areas of the buffer memory on second data processing units of the at least one second data processing unit before being read in.”; however, regarding these limitations, Huetter teaches “[0011] In a method for processing data, the invention provides for an algorithm which does not have to randomly access data which have already been previously processed to be used during the step of processing a data block. With such an algorithm, there is thus no need to keep available a respective memory area for the output data and the result, that is to say data which have already been used in the algorithm can be overwritten with the result data in the memory. If the algorithm requires deterministic access to individual data which have previously been processed (for example in the case of a digital horizontal filter), these values may be buffered and are therefore still available to the algorithm despite the original data having been overwritten. [0012] In a method for processing data from a first data processing system in a second data processing system, the invention provides for data to be written to a memory area of the second data processing system in a first interval of time. 
The data are processed in the same memory area of the second data processing system in a second interval of time. The data are returned to the first data processing system from this memory area of the second data processing system in a third interval of time. The use of the method according to the invention to process data in a second data processing system, with the data coming from a first data processing system and the result data being returned to the latter again, uses the advantage of the proposed method and enables bidirectional data communication between the first data processing system and the second data processing system with a reduced memory requirement in the second data processing system.” Where the transferring of data between the processing units corresponds to transfer interfaces being used for the transfers as taught by Huetter. In view of Huetter, one of ordinary skill in the art would have found it obvious to modify the patent to include “wherein output data generated by those of the individual data processing tasks executing on second data processing units of the at least one second data processing unit are transmitted via the data transmission interfaces after output into memory areas of the buffer memory of first data processing units of the at least one first data processing unit… wherein those of the input data, which are required by those of the individual data processing tasks executing on second data processing units of the at least one second data processing unit, are transmitted via the data transmission interfaces into memory areas of the buffer memory on second data processing units of the at least one second data processing unit before being read in.” since doing so would provide the benefits of “[0012]… The use of the method according to the invention to process data in a second data processing system, with the data coming from a first data processing system and the result data being returned to the latter again, uses the 
advantage of the proposed method and enables bidirectional data communication between the first data processing system and the second data processing system with a reduced memory requirement in the second data processing system.” Claims 17 and 26 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 13-23 of co-pending U.S. Application No. 18/535,308 (now patented but not yet published). Although the conflicting claims are not identical, they are not patentably distinct from each other because the claims in the co-pending application disclose/obviate the subject matter of the claims in the instant application. Claims of the instant application are compared to claims of the co-pending application in the following table: Instant Application Application No. 18/535,308 (now patented but not yet published) 17. (New) A method for operating a data processing system for processing data, wherein the data processing system is set up for repeated execution of a plurality of individual data processing tasks, wherein: a time grid with clock pulses is provided for execution of the individual data processing tasks, a predetermined respective repetition rate is specified for each of the individual data processing tasks, wherein the respective repetition rates each define a repetition clock pulse which corresponds in each case to an integer number of clock pulses of the grid, the repetition clock pulse of a data processing task of the individual data processing tasks with a highest respective repetition rate corresponds to the clock pulses of the time grid, the individual data processing tasks build on one another, so that at least one of the data processing tasks processes output data of a further one of the data processing tasks as input data, a number of buffer memories are provided, which are assigned to the clock pulses of the time grid and are available in turn, so that output data generated during a relevant clock pulse of the 
clock pulses of the grid are written to the assigned buffer memory, and output data generated during previous clock pulses are still available in other buffer memories for a number of clock pulses, the data processing system is operated on a data processing device including at least one first data processing unit and including at least one second data processing unit, which in each case include processors and memory modules, wherein data transmission interfaces exist for data transmission between the first and second data processing units, wherein each of the individual data processing tasks is associated with at least one first data processing unit of the at least one first data processing unit or at least one second data processing unit of the at least one second data processing unit, and memory areas of the buffer memories are made available to the memory modules of the at least one first data processing unit and the at least one second data processing unit; wherein the following steps are carried out for the operation of the data processing system: a) executing each of the individual data processing tasks at its respective repetition rate in the time grid on one of the first and second data processing units of the data processing system; b) outputting output data by the individual data processing tasks into respectively available memory areas of the buffer memory which is assigned to the clock pulse of the grid, wherein output data generated by those of the individual data processing tasks executing on second data processing units of the at least one second data processing unit are transmitted via the data transmission interfaces after output into memory areas of the buffer memory of first data processing units of the at least one first data processing unit; and c) reading in input data by the individual data processing tasks from respectively available memory areas of the buffer memory, which are associated with preceding clock pulses of the grid, wherein 
those of the input data, which are required by those of the individual data processing tasks executing on second data processing units of the at least one second data processing unit, are transmitted via the data transmission interfaces into memory areas of the buffer memory on second data processing units of the at least one second data processing unit before being read in. 26. (New) The method according to claim 17, wherein first data processing units of the at least one first data processing unit monitor that second data processing units of the at least one second data processing unit terminate associated data processing tasks such that the output data are available for data processing of preceding clock pulses in each clock pulse. 13. A method for operating a data processing system for processing data, wherein: (i) the data processing system is set up for repeated execution of a plurality of individual data processing tasks, (ii) a time grid with a base clock pulse is provided for the execution of the individual data processing tasks, (iii) a respective predetermined repetition rate is specified for each of the individual data processing tasks, (iv) each of the respective repetition rates defines a repetition clock pulse which corresponds in each case to a respective integer number of instances of the base clock pulse of the grid, (v) one of the repetition clock pulses, which corresponds to one of the individual data processing tasks that has a highest repetition rate of all of the plurality of individual data processing tasks, is equal to the base clock pulse of the time grid, (vi) the individual data processing tasks build on one another so that at least one of the data processing tasks processes output data of a further one of the data processing tasks as input data, and (vii) a number of buffer memories are provided, which are assigned to respective instances of the base clock pulse of the time grid and which are available in turn, so that output data generated 
during a respective instance of the base clock pulse are written to a respective buffer memory of the buffer memories and output data generated during previous instances of the base clock pulse continue to be available in others of the buffer memories for a number of instances of the base clock pulse; and See claims 14, 16 and 18-20 below. Note that the output of a processing device to serve as input of the other based on data stored in the buffer memories as set forth in claim 14 corresponds to transfer interfaces between the processing units wherein the method comprises carrying out the following steps for the operation of the data processing system: a) executing each of the data processing tasks at its respective repetition rate in the time grid; b) outputting respective portions of the output data by one or more of the individual data processing tasks, each into the respective one of the buffer memories assigned to the respective instance of the base clock pulse of the grid in which the respective individual data processing task is executed; and see claims 18-19 below c) reading in respective portions of the input data by one or more of the individual data processing tasks from the buffer memories which are assigned to the preceding instances of the base clock pulse of the grid. See claims 14, 18-19 below. Note that the output of a processing device to serve as input of the other based on data stored in the buffer memories as set forth in claim 14 corresponds to transfer interfaces between the processing units. 14. The method according to claim 13, wherein at least a portion of the output data generated by a first subset of the data processing tasks is further processed as a portion of the input data used by a further subset of the data processing tasks without a copying operation. 15. 
The method according to claim 13, wherein messages between the data processing tasks are exchanged only via the buffer memories, so that communication between the data processing tasks takes place only via the buffer memories. 16. The method according to claim 13, wherein, for each instance of the base clock pulse at which one or more of the data processing tasks are intended for execution, the one or more of the data processing tasks intended for execution at the respective instance are activated at a start time of the respective instance of the base clock pulse, and, for each instance of the base clock pulse at which at least two of the data processing tasks are intended for execution, respective starts of execution of the at least two of the data processing tasks take place in an order that corresponds to their respective repetition rates so that, for each pair of the at least two data processing tasks that have different repetition rates, the respective execution start of whichever has a higher one of the repetition rates takes place temporally before the respective execution start of the other. 17. (Previously Presented) The method according to claim 13, wherein execution of those of the data processing tasks with a higher repetition rate are prioritized over execution of those of the data processing tasks with a lower repetition rate. 18. (Previously Presented) The method according to claim 13, wherein the buffer memories are structured in such a way that memory areas provided for specific output data from data processing tasks are provided within the buffer memories. 19. The method according to claim 18, wherein, for those of the data processing tasks that obtain input data from the buffer memories, it is specified from which memory areas of the buffer memories the input data are to be read. 20. 
The method according to claim 19, wherein selection and addressing of the buffer memories is calculated using associated task counters of those of the data processing tasks involved. 21. The method according to claim 13, wherein the number of buffer memories is such that all of the input data generated as output data by one or more of the data processing tasks during any of the instances of the base clock pulse and that also are respectively required for the execution, subsequently, of one or more other ones of the data processing tasks remain respectively available to those other data processing tasks via the buffer memories. The above rationale applied for claim 17 is incorporated for claims 30 and 31. RELEVANT ART CITED BY THE EXAMINER The following prior art made of record and not relied upon is cited to establish the level of skill in the applicant’s art and those arts considered reasonably pertinent to applicant’s disclosure. See MPEP 707.05(c). Alvarez Martinez et al. (US 2020/0110634) teaches “[0017] In a first aspect, a method is proposed of managing task dependencies at runtime in a parallel computing system of a hardware processing system. The parallel computing system may comprise a multi-core processor running runtime software, a hardware acceleration processor, a communication module, and a gateway. 
The method may comprise: initializing the parallel computing system; allocating data buffers in system memory for each thread running in each multi-core processing element; sending system memory address and length of buffers used in a communication to the hardware acceleration processor, using buffered and asynchronous communication; the hardware acceleration processor directly accessing the buffers bypassing the threads running in the parallel processing elements; the hardware acceleration processor reading a new task buffer and upon sensing a memory full condition or critical memory conflict in a dedicated local memory attached to the hardware acceleration processor, instructing the gateway to stop received new tasks; the hardware acceleration processor continues processing the dependencies of the last read task; and having finished dependency processing of the last read task, memory space is freed and processing continues with a next task.” Yin et al. (US 2017/0329631) teaches “An apparatus includes a programmable circuit that configures circuits for executing tasks. The apparatus estimates an execution time-period required for executing a first task by first circuits configured in the programmable circuit, and determines a configuration number indicating a number of second circuits that are to be configured, in the programmable circuit, for executing a second task to be executed after the first task, based on the execution time-period and a configuration time-period required for configuring the configuration number of the second circuits in the programmable circuit. The apparatus causes the programmable circuit to configure, during execution of the first task, the configuration number of the second circuits, and adjusts the configuration number, based on a relationship between a time at which the first task is completed and a time at which configuration of the configuration number of the second circuits in the programmable circuit is completed.” (Abstract). 
“[0093] In the actual operations, the time period for executing the task 1 is 7T that is initially estimated ((e) illustrated in FIG. 7). Upon the completion of the task 1, the CPU estimates that the amount of processing to be executed by the task 2 increases due to dependence between the task 1 and the task 2 and that the time period for executing the task 2 by a single circuit increases from 64T to 100T ((f) illustrated in FIG. 7). For example, in a case where the number of times processing is executed by the task 2 is determined based on processing executed by the task 1, the amount of the processing to be executed by the task 2 changes depending on the result of the processing executed by the task 1. The number of the times processing is executed by the task 2 is the number of times a loop described in the application program for executing the task 2 is executed or the like.” Yasue (US 2006/0179436) teaches “Methods and apparatus provide for executing one or more software programs within a plurality of processors of a multi-processing system in accordance with a data parallel processing model, the software programs being comprised of a number of processing tasks, each task executing instructions on one or more input data units to produce an output data unit, and each data unit containing one or more data objects; responding to one or more application programming interface codes to change from a current processing task to a subsequent processing task within a given one or more of the processors; and using the output data unit produced by the current processor task as an input data unit by the subsequent processing task to produce a further output data unit within the same processor.” (Abstract). Zisman et al. (US 12,175,285) teaches “An integrated circuit for distributing processing tasks includes a pre-selector circuit and a scheduler circuit. 
The pre-selector circuit is configured to receive a processing task, determine a category of the processing task, and select, from a set of task distribution techniques and based at least in part on the category of the processing task, a task distribution technique for distributing the processing task to a group of processing units. The scheduler circuit is configured to implement the selected task distribution technique to select, from the group of processing units, a target processing unit for performing the processing task.” (Abstract). Hsu et al. (US 12,159,057) teaches “Implementing data flows of an application across a memory hierarchy of a data processing array includes receiving a data flow graph specifying an application for execution on the data processing array. A plurality of buffer objects corresponding to a plurality of different levels of the memory hierarchy of the data processing array and an external memory are identified. The plurality of buffer objects specify data flows. Buffer object parameters are determined. The buffer object parameters define properties of the data flows. Data that configures the data processing array to implement the data flows among the plurality of different levels of the memory hierarchy and the external memory is generated based on the plurality of buffer objects and the buffer object parameters.” (Abstract). Huetter (US 2010/0122045) teaches “The present invention relates to a method for processing data. A data block to be processed is written to a memory area in a first interval of time. The data block is processed in the same memory area (A, B, C) in a second interval of time. The processed data block is returned from the same memory area in a third interval of time.” (Abstract). Segal et al. 
(US 2004/0066765) teaches “A novel data transfer scheme for efficiently transferring data between multiple data generating processing units in a processing element wherein each processing unit may generate data at different rates. The data output of each processing unit is multiplexed into a single data stream and written to a memory buffer. A centralized software processor such as a CPU or DSP implements a demultiplexer operative to read the contents of the input buffer, demultiplex the data and distribute it to individual unit buffers thus recreating the original data streams generating by each of the processing units. The multiplexed data stream is generated by partitioning the outputs of the data generating processing units into multiple multiplexer groups based on individual data rates. The outputs of the various groups are collected by a multiplexer and used to build a single data stream having a well-defined structure.” (Abstract). Chen (US 2020/0117505) teaches “A memory processor-based multiprocessing architecture and an operation method thereof are provided. The memory processor-based multiprocessing architecture includes a main processor and a plurality of memory chips. The memory chips include a plurality of processing units and a plurality of data storage areas. The processing units and the data storage areas are respectively disposed one-to-one in the memory chips. The data storage areas are configured to share a plurality of sub-datasets of a large dataset. The main processor assigns a computing task to one of the processing units of the memory chips, so that the one of the processing units accesses the corresponding data storage area to perform the computing task according to a part of the sub-datasets.” (Abstract). CLOSING COMMENTS a. 
STATUS OF CLAIMS IN THE APPLICATION a(1) CLAIMS REJECTED IN THE APPLICATION Per the instant office action, claims 17, 26-28, 30-31 have received a first action on the merits and are subject of a first action non-final. a(2) CLAIMS NO LONGER UNDER CONSIDERATION Claims 1-16 have been canceled. a(3) ALLOWABLE SUBJECT MATTER Per the instant office action, claims 17, 30 and 31 would be allowable if the objections and double patenting rejections above are overcome. As per claim 17 (same reasoning applies to claims 30 and 31), the prior art of record does not disclose or render obvious the recited combinations above as a whole, with the inclusion of the limitations of “… a time grid with clock pulses is provided for execution of the individual data processing tasks, a predetermined respective repetition rate is specified for each of the individual data processing tasks, wherein the respective repetition rates each define a repetition clock pulse which corresponds in each case to an integer number of clock pulses of the grid, the repetition clock pulse of a data processing task of the individual data processing tasks with a highest respective repetition rate corresponds to the clock pulses of the time grid, the individual data processing tasks build on one another, so that at least one of the data processing tasks processes output data of a further one of the data processing tasks as input data… a number of buffer memories are provided, which are assigned to the clock pulses of the time grid and are available in turn, so that output data generated during a relevant clock pulse of the clock pulses of the grid are written to the assigned buffer memory, and output data generated during previous clock pulses are still available in other buffer memories for a number of clock pulses,… a) executing each of the individual data processing tasks at its respective repetition rate in the time grid on one of the first and second data processing units of the data processing system; b) 
outputting output data by the individual data processing tasks into respectively available memory areas of the buffer memory which is assigned to the clock pulse of the grid, wherein output data generated by those of the individual data processing tasks executing on second data processing units of the at least one second data processing unit are transmitted via the data transmission interfaces after output into memory areas of the buffer memory of first data processing units of the at least one first data processing unit; and c) reading in input data by the individual data processing tasks from respectively available memory areas of the buffer memory, which are associated with preceding clock pulses of the grid, wherein those of the input data, which are required by those of the individual data processing tasks executing on second data processing units of the at least one second data processing unit, are transmitted via the data transmission interfaces into memory areas of the buffer memory on second data processing units of the at least one second data processing unit before being read in.” Claims 18-20, 22-24 and 29 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form (and if the claim objections above are overcome) including all of the limitations of the base claim and any intervening claims. The prior art of record does not disclose or render obvious the recited combinations as recited in claims 18-20, 22-24 and 29 as a whole. Claims 21 and 25 are objected to by virtue of their dependence on objected claims 20 and 24, respectively. b. DIRECTION OF FUTURE CORRESPONDENCE Any inquiry concerning this communication or earlier communications from the examiner should be directed to YAIMA RIGOL whose telephone number is (571)272-1232. The examiner can normally be reached Monday-Friday 9:00AM-5:00PM. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jared I. Rutz can be reached on (571) 272-5535. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. March 24, 2026 /YAIMA RIGOL/ Primary Examiner, Art Unit 2135

Prosecution Timeline

Dec 14, 2023
Application Filed
Mar 24, 2026
Non-Final Rejection — §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591522
COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN MEMORY ACCESS CONTROL PROGRAM, MEMORY ACCESS CONTROL METHOD, AND INFORMATION PROCESSING APPARATUS
2y 5m to grant Granted Mar 31, 2026
Patent 12585581
MEMORY MODULE HAVING VOLATILE AND NON-VOLATILE MEMORY SUBSYSTEMS AND METHOD OF OPERATION
2y 5m to grant Granted Mar 24, 2026
Patent 12579073
APPARATUS AND METHOD FOR INTELLIGENT MEMORY PAGE MANAGEMENT
2y 5m to grant Granted Mar 17, 2026
Patent 12578899
MEMORY DEVICE, MEMORY SYSTEM, MEMORY CONTROLLER, AND OPERATION METHOD
2y 5m to grant Granted Mar 17, 2026
Patent 12566716
SYSTEMS AND METHODS FOR TIMESTEP SHARED MEMORY MULTIPROCESSING BASED ON TRACKING TABLE MECHANISMS
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
75%
Grant Probability
92%
With Interview (+17.5%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 619 resolved cases by this examiner. Grant probability derived from career allow rate.
