Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 21 is objected to because of the following informalities: reference numerals appear throughout the claim. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-10, 12-17, and 21 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Regarding claim 1, the claim recites the limitation “NPU controller for monitoring a current resource of the plurality of AI sources.” However, paragraph [0068] of the specification states that the NPU driver monitors the current resource of the available AI sources, while paragraph [0101] states that the NPU controller monitors the available AI sources. Because the specification attributes the same function to two different components, it is unclear which component performs the recited monitoring operation.
Regarding claim 21, the claim recites a limitation similar to that of claim 1 and is rejected for the reasons set forth above with respect to claim 1.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kim (EP 3 944 628 A1) in view of Pieper (US 11978181 B1).
Regarding claim 18, Kim teaches:
A method for processing an AI operation in a camera device provided with a processor and a memory storing instructions executable by the processor, wherein the method is performed by the instructions controlled by the processor and the method comprises: ([0040] FIG. 2 is a block diagram illustrating an embodiment of any one of artificial intelligence cameras of FIG. 1. FIG. 3 is a block diagram illustrating an embodiment of a main processor of FIG. 2. FIG. 4 is a block diagram illustrating an embodiment of an artificial intelligence processor of FIG. 2.)
capturing an image of a subject; ([0043] The image sensor 210 may operate in response to the control of the main processor 250. The image sensor 210 is configured to convert an optical signal received through the lens 205 into an electrical signal and digitize the converted electrical signal to generate an image. For example, the image sensor 210 may include an analog-to-digital converter configured to convert an analog image signal into digital image data.)
requesting an AI operation on the captured image; ([0045] The main processor 250 controls general operations of the artificial intelligence camera 200. The main processor 250 may communicate with the components connected to the network 50 (see FIG. 1) through the network interface 260. The main processor 250 is configured to appropriately process the image received from the image sensor 210, and may transmit the processed image to the user terminal 110 through the network interface 260 as a live-view.)
and performing video analysis on the captured image by using a processing result of the AI operation. ([0079] In S420, the artificial intelligence camera 200 accesses the allocated normal camera to receive an image captured by the allocated normal camera. [0080] In S430, the artificial intelligence camera 200 analyzes the received image using the artificial intelligence processor 270. [0081] In S440, the artificial intelligence camera 200 transmits result information according to the analysis of the image to the image management server 140 together with an identifier of the allocated normal camera. The artificial intelligence camera 200 may include the result information according to the analysis of the image and the identifier of the normal camera in a second metadata META2 field of the output data DOUT illustrated in FIG. 5.)
Kim does not appear to explicitly teach: monitoring a current resource of available AI sources including a plurality of AI sources embedded in the camera device and an AI source external to the camera device according to the AI operation request; and processing an AI operation by at least some of the available AI sources according to priorities of the available AI sources and an idle amount of the current resource.
However, Pieper teaches these limitations (col. 32, line 34 – col. 33, line 6): In at least one embodiment, as shown in FIG. 9, data center infrastructure layer 910 may include a resource orchestrator 912, grouped computing resources 914, and node computing resources (“node C.R.s”) 916(1)-916(N), where “N” represents a positive integer (which may be a different integer “N” than used in other figures). In at least one embodiment, node C.R.s 916(1)-916(N) may include, but are not limited to, any number of central processing units (“CPUs”) or other processors (including accelerators, field programmable gate arrays (FPGAs), graphics processors, etc.), memory storage devices 918(1)-918(N) (e.g., dynamic read-only memory, solid state storage or disk drives), network input/output (“NW I/O”) devices, network switches, virtual machines (“VMs”), power modules, and cooling modules, etc. In at least one embodiment, one or more node C.R.s from among node C.R.s 916(1)-916(N) may be a server having one or more of the above-mentioned computing resources. In at least one embodiment, grouped computing resources 914 may include separate groupings of node C.R.s housed within one or more racks (not shown), or many racks housed in data centers at various geographical locations (also not shown). In at least one embodiment, separate groupings of node C.R.s within grouped computing resources 914 may include grouped compute, network, memory or storage resources that may be configured or allocated to support one or more workloads. In at least one embodiment, several node C.R.s including CPUs or processors may be grouped within one or more racks to provide compute resources to support one or more workloads. In at least one embodiment, one or more racks may also include any number of power modules, cooling modules, and network switches, in any combination. In at least one embodiment, resource orchestrator 912 may configure or otherwise control one or more node C.R.s 916(1)-916(N) and/or grouped computing resources 914. In at least one embodiment, resource orchestrator 912 may include a software design infrastructure (“SDI”) management entity for data center 900. In at least one embodiment, resource orchestrator 912 may include hardware, software or some combination thereof.
Pieper further teaches (col. 19, line 56 – col. 20, line 8): FIG. 5A is a flow diagram of a process 500 to perform a machine vision task based on one or more images having accurate or absolute luminance values, in accordance with at least one embodiment. In at least one embodiment, one or more processors of an image signal processing (ISP) pipeline of a camera generate absolute luminance values for an image received from a sensor of a camera, such that each pixel of the image may have a corresponding accurate luminance value. In at least one embodiment, a processing logic may use one or more trained neural networks to process luminance values of an image to identify objects in said image, to make predictions based on said image, to make decisions (e.g., driving decisions) based on said image, and so on. In at least one embodiment, alternatively or in addition to using trained neural networks, processing logic may use traditional computer vision techniques, such as Scale Invariant Feature Transform (SIFT) or Speeded Up Robust Features (SURF), to process luminance values of an image to identify and/or detect objects within said image.
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Kim and Pieper before them, to incorporate Pieper’s resource monitoring and orchestration techniques into Kim’s AI camera system. One would have been motivated to make such a combination in order to manage AI processing resources more efficiently by monitoring current resources and selecting alternative AI sources when a selected resource becomes constrained.
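For discussion purposes only, the examiner's understanding of the claimed monitoring-and-dispatch scheme may be illustrated by the following sketch. All identifiers, priorities, and capacity values are hypothetical examples invented for this illustration; they are not drawn from the claims, Kim, or Pieper.

    # Hypothetical sketch of the claim 18 limitation: monitor the current
    # resource of a plurality of AI sources (embedded and external) and
    # dispatch an AI operation according to priority and idle resource amount.
    from dataclasses import dataclass

    @dataclass
    class AISource:
        name: str             # invented labels, e.g., "embedded NPU"
        priority: int         # lower number = higher priority
        idle_capacity: float  # monitored idle amount of the current resource, 0.0 to 1.0

    def dispatch_ai_operation(sources, required_capacity):
        # Walk the sources in priority order and pick the first one whose
        # monitored idle resource amount can accommodate the requested operation.
        for source in sorted(sources, key=lambda s: s.priority):
            if source.idle_capacity >= required_capacity:
                return source.name
        return None  # no source currently has sufficient idle resources

    sources = [AISource("embedded NPU", 0, 0.10),
               AISource("embedded DSP", 1, 0.60),
               AISource("external server", 2, 0.90)]
    print(dispatch_ai_operation(sources, 0.5))  # prints "embedded DSP"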
Regarding claim 19, Kim further teaches:
The method of claim 18, wherein processing the AI operation comprises: monitoring a current resource of available AI sources including the embedded plurality of AI sources and the AI source external to the camera device; and selecting an AI source having the highest priority from among AI sources in which the value of the monitored current resource is equal to or greater than a threshold value. ([0091] When the artificial intelligence camera 200 analyzes the image of the normal camera, the artificial intelligence camera 200 may variably control a length of the image received from the normal camera through S422 and S423 depending on whether or not a probability according to the deep learning (or the artificial intelligence) is a threshold value or more. The artificial intelligence processor 270 may output, for example, the probability according to the deep learning together with image analysis through the deep learning, and when this probability is the threshold value or more, it may mean that the image analysis is completed. When the probability according to the deep learning is the threshold value or more, additional image analysis is not required, and thus, the artificial intelligence camera 200 and/or the main processor 250 may variably adjust the length of the image received from the normal camera depending on whether or not the probability according to the deep learning is the threshold value or more. The artificial intelligence camera 200 may receive an image having a relatively short length or an image having a relatively long length depending on whether or not the probability according to the deep learning is the threshold value or more. For example, the artificial intelligence camera 200 may request and receive an additional image when the additional image is required, and in this case, a length of the additional image, the number of times to request the additional image, and the like, may be variably adjusted depending on the probability according to the deep learning. [0092] When the artificial intelligence camera 200 receives a still image from the normal camera, the artificial intelligence camera 200 may request an additional still image depending on whether or not the probability according to the deep learning is the threshold value or more, and when the artificial intelligence camera 200 receives a dynamic image from the normal camera, the artificial intelligence camera 200 may request the normal camera to stop transmitting the dynamic image depending on whether or not the probability according to the deep learning is the threshold value or more. [0093] As such, the artificial intelligence camera 200 may variably control the length of the image received from the normal camera depending on whether or not the probability according to the deep learning is the threshold value or more. Therefore, a time required for changing or restoring the setting values of the normal camera may also be shortened.)
Regarding claim 20, Pieper teaches:
The method of claim 19, wherein processing the AI operation comprises, after selecting the AI source, monitoring a current resource of the selected AI source and selecting an AI source having a next priority to the selected AI source in response to the value of the current resource falling below the threshold value. (col. 125, line 42 – col. 126, line 2: In at least one embodiment, AI services 3718 may be leveraged to perform inferencing services for executing machine learning model(s) associated with applications (e.g., tasked with performing one or more processing tasks of an application). In at least one embodiment, AI services 3718 may leverage AI system 3724 to execute machine learning model(s) (e.g., neural networks, such as CNNs) for segmentation, reconstruction, object detection, feature detection, classification, and/or other inferencing tasks. In at least one embodiment, applications of deployment pipeline(s) 3710 may use one or more of output models 3616 from training system 3604 and/or other models of applications to perform inference on imaging data (e.g., DICOM data, RIS data, CIS data, REST compliant data, RPC data, raw data, etc.). In at least one embodiment, two or more examples of inferencing using application orchestration system 3728 (e.g., a scheduler) may be available. In at least one embodiment, a first category may include a high priority/low latency path that may achieve higher service level agreements, such as for performing inference on urgent requests during an emergency, or for a radiologist during diagnosis. In at least one embodiment, a second category may include a standard priority path that may be used for requests that may be non-urgent or where analysis may be performed at a later time. In at least one embodiment, application orchestration system 3728 may distribute resources (e.g., services 3620 and/or hardware 3622) based on priority paths for different inferencing tasks of AI services 3718.)
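For discussion purposes only, the select-then-fall-back behavior recited in claims 19 and 20 may be illustrated by the following sketch. All identifiers, threshold values, and resource values are hypothetical examples invented for this illustration; they are not drawn from the claims or the cited references.

    # Hypothetical sketch: claim 19 selects the highest-priority AI source whose
    # monitored resource value meets a threshold; claim 20 switches to the
    # next-priority source when the selected source falls below the threshold.

    def select_source(sources, threshold):
        # sources: list of (name, priority, resource_value); lower priority
        # number = higher priority. Return the highest-priority qualifying source.
        for name, _, resource in sorted(sources, key=lambda s: s[1]):
            if resource >= threshold:
                return name
        return None

    def reselect_on_degradation(sources, threshold, selected, new_resource):
        # If the selected source's monitored resource drops below the threshold,
        # fall back to the next-priority source that still meets it.
        if new_resource >= threshold:
            return selected
        remaining = [s for s in sources if s[0] != selected]
        return select_source(remaining, threshold)

    sources = [("NPU", 0, 0.8), ("GPU", 1, 0.7), ("cloud", 2, 0.9)]
    first = select_source(sources, threshold=0.5)                            # "NPU"
    backup = reselect_on_degradation(sources, 0.5, first, new_resource=0.3)  # "GPU"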
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CARLOS A ESPANA whose telephone number is (703) 756-1069. The examiner can normally be reached Monday - Friday, 8 a.m. - 5 p.m. EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, LEWIS BULLOCK JR can be reached at (571)272-3759. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/C.A.E./Examiner, Art Unit 2199
/LEWIS A BULLOCK JR/Supervisory Patent Examiner, Art Unit 2199