DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-6 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 1 (and similarly claims 5 and 6) recites: “a model used by each task”, “a queue that stores a task for each model used by each task”, “read a task using a model”, and “cause the accelerator to execute the task”.
It is unclear how “a model” and “each model”, and “a task” and “each task”, are distinguished from one another. The claims do not provide clear antecedent basis for “a model”/“each model” or “a task”/“each task”.
Claim 1 (and similarly claims 5 and 6) recites “each task” and “each model”. The intent of the term “each” is unclear: the term would ordinarily imply more than one, yet the claims do not require more than one task or model. In addition, the later reference to a model as “each model” (the examiner's speculation) introduces further confusion, since there is no clear antecedent basis.
Claim 2 recites: “a number of times of switching of the accelerator is reduced”. It is unclear whether the “switching” refers to switching of the setting of the accelerator or to actual switching of the accelerator itself.
Claim 4 recites: “each task”. It is unclear which task “each task” refers to, since claim 1 also recites “each task” and claim 4 recites “one or more tasks” issued by each process. The claim does not provide clear antecedent basis.
Claims 2-4 and 6 are also rejected based on the rejection of the claims from which they depend.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6 are rejected under 35 U.S.C. 103 as being unpatentable over Kitani et al. (US 2020/0143670; hereafter Kitani) in view of Mannar (US 11,526,385).
As per claim 1, Kitani teaches:
A scheduling device, comprising:
a controller unit, implemented using one or more processors, configured to acquire a model used by each task; ([Paragraph 10], FIG. 3(a), FIG. 3(b), FIG. 3(c), FIG. 3(d), and FIG. 3(e) are status tables for use in AI model selection according to the first embodiment. [Paragraph 38], Since the AI model operation processing unit enabling option setting unit 2104 uses the AI model selected at the AI model selecting unit 2103, the AI model operation processing unit enabling option setting unit 2104 sets the accelerator 23 for enabling the combination pattern of the operation units.)
a control unit, implemented using the one or more processors, configured to perform control to switch a setting of an accelerator in such a manner that the model acquired by the controller unit becomes processable; and ([Paragraph 38], Since the AI model operation processing unit enabling option setting unit 2104 uses the AI model selected at the AI model selecting unit 2103, the AI model operation processing unit enabling option setting unit 2104 sets the accelerator 23 for enabling the combination pattern of the operation units. [Paragraph 42], The accelerator 23 includes hardware devices, such as an FPGA (Field-Programmable Gate Array), ASIC (Application Specific Integrated Circuit), and GPU (Graphics Processing Unit) configured to execute AI model operation processing at high speed. In the example shown in FIG. 2, the accelerator 23 includes an FPGA or ASIC, and the accelerator 23 is configured of the AI model operation processing unit 230 and AI model parameter information 231. [Paragraph 43], The AI model operation processing unit 230 executes AI model operation processing, and configured of one or more operation units 2300. The AI model parameter information 231 is parameter information for use in AI model operation processing, and indicates the coupling coefficient between the operation units u described in FIG. 1, for example. Note that the AI model parameter information 231 may be held in the inside or on the outside of the hardware device of the accelerator 23. In the case in which the AI model parameter information 231 on the outside of the device, the AI model parameter information 231 may be stored on the storage unit 22, or may be stored on another storage unit, not shown, connected to the accelerator 23. [Paragraph 52], The AI model may be switched for every some driving scenes from the driving scenes shown in FIG. 3(b), or the AI model may be switched corresponding to the combination of driving scenes including driving scenes not shown in FIG. 3(b).)
a scheduler unit, implemented using the one or more processors, configured to refer to a queue that stores a task for each model used by each task, read a task using a model that has become processable by switching by the control unit, and cause the accelerator to execute the task. ([Paragraph 39], In order to execute AI model operation processing, the AI model operation processing execution control unit 2105 transfers input data necessary to AI model operation to the accelerator 23, and delivers a control instruction relating to operation execution start. [Paragraph 52], The AI model may be switched for every some driving scenes from the driving scenes shown in FIG. 3(b), or the AI model may be switched corresponding to the combination of driving scenes including driving scenes not shown in FIG. 3(b). [Paragraph 87], The accelerator 23 has a configuration in which the AI model parameter information 231 shown in FIG. 2 is removed. In this configuration, the accelerator 23 does not keep holding AI model parameter information, and AI model parameter information is transferred to the accelerator 23 for every AI model operation process execution. However, a configuration may be provided in which the accelerator 23 keeps holding this information.)
Although Kitani discloses execution of tasks by models, Kitani does not explicitly disclose a scheduler unit, implemented using the one or more processors, configured to refer to a queue that stores a task for each model used by each task, and read a task using a model.
Mannar teaches a scheduler unit, implemented using the one or more processors, configured to refer to a queue that stores a task for each model used by each task, read a task using a model. ([Column 15 line 17-42], In some embodiments, the priority queue 214, scheduler 216, and the reprioritizer 218 may operate in accordance with a machine learning model configured to prioritize, schedule, and monitor task(s)/sub-task(s). The machine learning model may be a supervised or unsupervised machine learning model, and may include any suitable machine learning algorithm. Accordingly, the machine learning model may develop correlations between the types of tasks/sub-tasks received at the priority queue 214 and the types of computing resources available (via the resource IDs) to more efficiently and effectively schedule received task(s)/sub-task(s). These correlations may work to minimize the amount of reprioritization required from the reprioritizer 218 and the overall amount of wasted processing time/power resulting from not transmitting task(s)/sub-task(s) to an optimal computing resource during an initial prioritization and scheduling of the task(s)/sub-task(s). [Column 14 line 6-26], In some embodiments, the priority queue 214 may include multiple queues for tasks/sub-tasks based upon the type of task/sub-task (e.g., data prep, gird searching, training, etc.), processing requirements (e.g., robust processor, mid-level processor, etc.), and/or any other suitable categorization(s) or combinations thereof.)
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Kitani, wherein a model is selected and acquired to execute a task, accelerator settings are switched to process the acquired model, and the task is executed, with the teachings of Mannar, wherein a scheduler prioritizes, reprioritizes, and assigns tasks into queues. Doing so would enhance the teachings of Kitani because assigning tasks into an appropriate queue for execution based on task type, priority, processing requirements, etc., allows tasks to be executed by an optimal model based on various factors, including priority.
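For illustration only (not part of the record), the arrangement recited in claim 1 — per-model task queues, accelerator reconfiguration so that an acquired model becomes processable, and dispatch of a task using that model — could be sketched as follows. All class, method, and model names are hypothetical and do not purport to reflect either reference's actual implementation:

```python
from collections import deque

class Scheduler:
    """Minimal sketch of the claim 1 arrangement (hypothetical names)."""

    def __init__(self):
        self.queues = {}           # model id -> queue of tasks using that model
        self.current_model = None  # model the accelerator is currently set for

    def submit(self, model, task):
        # "a queue that stores a task for each model used by each task"
        self.queues.setdefault(model, deque()).append(task)

    def switch(self, model):
        # "switch a setting of an accelerator in such a manner that the
        # model ... becomes processable"
        self.current_model = model

    def run_next(self, model):
        # "read a task using a model that has become processable by switching"
        if self.current_model != model:
            self.switch(model)
        task = self.queues[model].popleft()
        # "cause the accelerator to execute the task"
        return f"executed {task} with {model}"

sched = Scheduler()
sched.submit("M001", "task-A")
sched.submit("M002", "task-B")
print(sched.run_next("M001"))  # executed task-A with M001
```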
As per claim 2, rejection of claim 1 is incorporated:
Kitani teaches wherein the control unit is configured to perform scheduling processing of switching the setting of the accelerator in such a manner that a number of times of switching of the accelerator is reduced by causing the accelerator to continuously execute a plurality of tasks using the same model, and a waiting time from an arrival time of a task stored in the queue is shortened. ([Paragraph 149], The priority level is imparted based on the relative relationship between the host vehicle and the object. Thus, in consideration of the priority level of the object, processing time necessary to AI model operation processing including a neural network can be reduced. [Paragraph 33], The prediction execution control unit 210 is configured of a computing unit configured to compute operation processing time by an AI model (an AI model operation processing time computing unit) 2100, a determining unit configured to determine whether operation processing time by the AI model exceeds a predetermined time period (an AI model operation processing time excess determining unit) 2101, an acquiring unit configured to acquire the status of the electronic controller (an electronic controller status acquiring unit) 2102, a selecting unit 2103 configured to select an AI model, AI model operation processing unit enabling option setting unit 2104 configured to set enabling a unit used for AI model operation processing or disabling a unit not used, an AI model operation processing execution control unit 2105, and an AI model use determining unit 2106. [Paragraph 49], In the combination of the object number and the AI model shown in FIG. 3(a), in the case in which the object number n is 10≤n, for example, i.e., in the case in which the number of times of repeatedly executing AI model operation processing, in order to suppress processing time necessary to AI model operation, ID=M003 that is an AI model having a short processing time is used, although operation accuracy is slightly degraded. In the case in which the object number n is 0≤n<5, ID=M001 that is an AI model having a long processing time and high operation accuracy, compared with ID=M003, is used. In the case in which the object number n is 5≤n<10, ID=M002 in the middle is used. The combination of the object number and the AI model shown in FIG. 3(a) is an example, but not limited to this.)
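For illustration only (not part of the record), the effect recited in claim 2 — reducing the number of accelerator switches by executing runs of tasks that use the same model consecutively — can be sketched as below. The model IDs and ordering policy are hypothetical; the sort is stable, so tasks for a given model keep their arrival order within the batch:

```python
from itertools import groupby

def switches(order):
    # One accelerator reconfiguration per run of consecutive tasks
    # that use the same model.
    return sum(1 for _ in groupby(model for model, _task in order))

arrival = [("M1", "a"), ("M2", "b"), ("M1", "c"), ("M2", "d")]
fifo = arrival                                 # execute in arrival order
batched = sorted(arrival, key=lambda t: t[0])  # group tasks by model first

assert switches(fifo) == 4      # reconfigure before every task
assert switches(batched) == 2   # one reconfiguration per model
```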
As per claim 3, rejection of claim 1 is incorporated:
Kitani teaches wherein a turn around time (TAT) requirement that is a time limit required from an arrival time of a task to an end time of the task is specified for each task, and
the control unit is configured to set switching from a current model to another model in the accelerator in a case where an end time of the another model when the current model is switched to the another model on a basis of a switching time of the accelerator from the current model operating in the accelerator to the another model and an execution time of a task of the another model exceeds a deadline time calculated from the TAT requirement of the another model. ([Paragraph 67], These AI models are properly used corresponding to the status of the vehicle electronic controller 20 shown in FIG. 3, and hence processing can be completed within a predetermined time period (within the deadline) for desired processing completion corresponding to the status of the vehicle electronic controller 20. [Paragraph 34], The AI model operation processing time computing unit 2100 computes the estimation of operation processing time by an AI model 71 shown in FIG. 4(e), described later. For estimation computing, the evaluation result of AI model operation processing determined in advance in the design stage of an AI model is used. At the point in time of completion of the design of the AI model, AI model structure information or AI model parameter information is uniquely determined. For operation processing, an exclusive accelerator is used. Therefore, the estimation of AI model operation processing time is possible. [Paragraph 49], In the combination of the object number and the AI model shown in FIG. 3(a), in the case in which the object number n is 10≤n, for example, i.e., in the case in which the number of times of repeatedly executing AI model operation processing, in order to suppress processing time necessary to AI model operation, ID=M003 that is an AI model having a short processing time is used, although operation accuracy is slightly degraded. 
In the case in which the object number n is 0≤n<5, ID=M001 that is an AI model having a long processing time and high operation accuracy, compared with ID=M003, is used. In the case in which the object number n is 5≤n<10, ID=M002 in the middle is used. The combination of the object number and the AI model shown in FIG. 3(a) is an example, but not limited to this. [Paragraph 51], In this case, although in open road driving, the operation accuracy of object recognition using AI models or the behavior prediction of objects is slightly degraded, compared with expressway driving, processing time has to be decreased. Therefore, in open road driving, M005 that is an AI model having a short processing time, compared with in expressway driving, is selected, whereas in expressway driving, M004 that is an AI model having high operation accuracy and a long processing time, compared with in open road driving, is selected. At the intersection, M006 that is an AI model having a shorter processing time than in the open road is selected. At the high-frequency accident location, M007 that is an AI model having a slightly shorter processing time than in the open road is selected. In the parking lot, M008 that is an AI model having a longer processing time than in the expressway is selected.)
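For illustration only (not part of the record), one plausible reading of the deadline condition in claim 3 — comparing the end time implied by the accelerator switching time plus the task execution time of the other model against a deadline derived from that model's TAT requirement — can be sketched as below. The function name, parameters, and time units are hypothetical:

```python
def exceeds_deadline(now, switch_time, exec_time, arrival_time, tat):
    """True if switching to the other model now and executing its task
    would finish after the deadline implied by the TAT requirement
    (deadline = task arrival time + TAT)."""
    end_time = now + switch_time + exec_time
    deadline = arrival_time + tat
    return end_time > deadline

# A task arriving at t=0 with a 100 ms TAT: at t=30, a 20 ms switch plus
# 40 ms of execution ends at t=90, within the deadline; 60 ms of
# execution would end at t=110 and exceed it.
assert not exceeds_deadline(now=30, switch_time=20, exec_time=40,
                            arrival_time=0, tat=100)
assert exceeds_deadline(now=30, switch_time=20, exec_time=60,
                        arrival_time=0, tat=100)
```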
As per claim 4, rejection of claim 3 is incorporated:
Kitani teaches wherein each process is configured to issue one or more tasks, a throughput (TP) requirement that defines a processing amount of a task per unit time is designated for each task, and when a new process is configured to occur in addition to a current process being deployed, the control unit is configured to determine that the new process is deployable based on scheduling processing being configured to satisfy the TP requirement and the TAT requirement of the current process in addition to the TP requirement and the TAT requirement of the new process. ([Paragraph 67], These AI models are properly used corresponding to the status of the vehicle electronic controller 20 shown in FIG. 3, and hence processing can be completed within a predetermined time period (within the deadline) for desired processing completion corresponding to the status of the vehicle electronic controller 20. [Paragraph 34], The AI model operation processing time computing unit 2100 computes the estimation of operation processing time by an AI model 71 shown in FIG. 4(e), described later. For estimation computing, the evaluation result of AI model operation processing determined in advance in the design stage of an AI model is used. At the point in time of completion of the design of the AI model, AI model structure information or AI model parameter information is uniquely determined. For operation processing, an exclusive accelerator is used. Therefore, the estimation of AI model operation processing time is possible. [Paragraph 49], In the combination of the object number and the AI model shown in FIG. 3(a), in the case in which the object number n is 10≤n, for example, i.e., in the case in which the number of times of repeatedly executing AI model operation processing, in order to suppress processing time necessary to AI model operation, ID=M003 that is an AI model having a short processing time is used, although operation accuracy is slightly degraded. 
In the case in which the object number n is 0≤n<5, ID=M001 that is an AI model having a long processing time and high operation accuracy, compared with ID=M003, is used. In the case in which the object number n is 5≤n<10, ID=M002 in the middle is used. The combination of the object number and the AI model shown in FIG. 3(a) is an example, but not limited to this. [Paragraph 51], In this case, although in open road driving, the operation accuracy of object recognition using AI models or the behavior prediction of objects is slightly degraded, compared with expressway driving, processing time has to be decreased. Therefore, in open road driving, M005 that is an AI model having a short processing time, compared with in expressway driving, is selected, whereas in expressway driving, M004 that is an AI model having high operation accuracy and a long processing time, compared with in open road driving, is selected. At the intersection, M006 that is an AI model having a shorter processing time than in the open road is selected. At the high-frequency accident location, M007 that is an AI model having a slightly shorter processing time than in the open road is selected. In the parking lot, M008 that is an AI model having a longer processing time than in the expressway is selected.)
Mannar also teaches ([Column 7 line 14-23], Such data may include data related to task performance criteria, processing resources required/used, billable rates associated with processing resources, anticipated/extrapolated task performance timelines, etc. [Column 11 line 1-16], The priority level associated with the sub-task may indicate a relative timeline for completion of the sub-task, and/or how the scheduler 114 should modify the schedule position of the sub-task to in view of the relative timeline for completion or the priority level of another sub-task. The resource requirement associated with the sub-task may indicate a number of processors/available memory (e.g., CPUs, GPUs, etc.) required or estimated to be required to complete the sub-task. The schedule position for the sub-task may indicate the sub-task's position within a queue of sub-tasks waiting to begin processing. For example, a sub-task may be fifth in line to begin processing in a certain processor, and as a result, may have a schedule position of five. [Column 12 line 53-67], In reference to the above example, the status module 128 may request a status update from the private domain 122 regarding sub-tasks A and B, and may request a status update from the public cloud 126 regarding sub-task C. The nodes from the private domain 122 processing sub-tasks A and B may submit respective processing status updates indicating that sub-task A is 50% complete and may require 3 additional hours of processing time, and that sub-task B is 65% complete and may require 2 additional hours of processing time. The node from the public cloud 126 processing sub-task C may submit a processing status update indicating that sub-task C is 20% complete and may require 6 additional hours of processing time. The status module 128 may forward these processing status updates to the task database 110, where they may be stored as a set of updates for the task request including sub-tasks A, B, and C.)
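For illustration only (not part of the record), the admission decision recited in claim 4 — a new process is deployable only if scheduling can still satisfy the TP and TAT requirements of the current processes as well as those of the new process — can be sketched with a simplified utilization check. This check (throughput requirement times per-task execution time must fit within accelerator capacity) is the examiner's hypothetical simplification, not a method disclosed by either reference:

```python
def deployable(current_processes, new_process, capacity=1.0):
    """Hypothetical admission check: the combined load of the current
    processes plus the new process must fit the accelerator capacity,
    a necessary condition for meeting every TP requirement."""
    def load(p):
        # TP requirement (tasks/second) x execution time per task (seconds)
        return p["tp"] * p["exec_time"]
    total = sum(load(p) for p in current_processes) + load(new_process)
    return total <= capacity

current = [{"tp": 10, "exec_time": 0.03}]                      # 30% utilization
assert deployable(current, {"tp": 10, "exec_time": 0.05})      # +50% -> 80%, fits
assert not deployable(current, {"tp": 20, "exec_time": 0.05})  # +100% -> 130%
```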
As per claims 5 and 6, these are method and non-transitory computer-readable medium claims corresponding to device claim 1. They are therefore rejected based on a similar rationale.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONG U KIM whose telephone number is (571)270-1313. The examiner can normally be reached 9:00am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets, can be reached at 571-272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DONG U KIM/Primary Examiner, Art Unit 2197