DETAILED ACTION
Claims 1-25 are presented for examination.
This office action is in response to the submission of the application on 30-OCTOBER-2025.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 30-OCTOBER-2025 has been entered.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 3-MARCH-2021 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
The information disclosure statement (IDS) submitted on 23-MARCH-2021 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
The information disclosure statement (IDS) submitted on 5-MAY-2021 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
The information disclosure statement (IDS) submitted on 21-OCTOBER-2021 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
The information disclosure statement (IDS) submitted on 24-NOVEMBER-2021 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
The information disclosure statement (IDS) submitted on 3-JANUARY-2022 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
The information disclosure statement (IDS) submitted on 4-MARCH-2022 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Amendment
The amendment filed 30-OCTOBER-2025 in response to the previous office action mailed 31-JULY-2025 has been entered. Claims 1-25 remain pending in the application.
With regard to the non-final office action's rejections under 35 U.S.C. 103, the amendments to the claims necessitated a new consideration of the art. After this consideration, the examiner respectfully disagrees with the applicant's arguments that the art referenced in the previous office action does not teach the amended claim limitations. A new 103 rejection over the prior art has been provided. Regarding the applicant's arguments for the 103 rejection, the examiner respectfully requests consideration of the analysis set forth in the rejection of claim 1 below.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:
Claim 1: the attribute identification engine; the picture quality engine; the data collection module; the training engine
Claim 3: the control module
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-25 are rejected under 35 U.S.C. 103 as being unpatentable over Elter (Pub. No. WO 2012085079 A2, filed December 21, 2011, hereinafter Elter) in view of Pezzillo et al. (Pub. No. US 20190370687 A1, filed May 31, 2019, hereinafter Pezzillo).
Regarding claim 1:
Claim 1 recites:
An image processing circuit in an edge device, comprising: an artificial intelligence (AI) processor to identify one or more attributes from an input image sequence using one or more models stored in a memory of an edge device; a picture quality (PQ) engine to generate an output image sequence for display by performing image enhancement operations the input image sequence based on the identified one or more attributes, wherein different image processing algorithms are used for enhancing different attributes; and a data collection module to generate labeled images based on the input image sequence labeled with the identified one or more attributes, and to add the labeled images to a training database stored in the memory; wherein the AI processor is further operative to re-train the one or more models using the labeled images in training database and use the re-trained model for attribute identification in subsequent input images.
Elter discloses an image processing circuit in an edge device, comprising: an artificial intelligence (AI) processor to identify one or more attributes from an input image sequence using one or more models stored in a memory of an edge device:
Elter states: “The feature extractor is configured to extract a feature set for each of the identified contiguous regions from the color capture.”
Elter further states: “To ensure consistent lighting conditions and white balance, all images were taken care of. Fig. 10 shows a part of a sample image from the data set. This image set was randomly divided into two subsets (each with 128 images) for training and testing”
Elter teaches a feature extractor that extracts a feature set from an image in order to identify objects within the image. The feature extractor identifies objects in a scene, which is a type of attribute, thereby identifying one or more attributes. Furthermore, in the second quotation of Elter, it will be appreciated that in the context of a feature set, training, and testing, Elter demonstrates the use of an artificial intelligence processor wherein the AI model processes an input image sequence, as in the training and testing sets.
Elter does not teach an edge device, which is taught further below by Pezzillo.
Elter discloses a picture quality (PQ) engine to generate an output image sequence for display by performing image enhancement operations on the input image sequence based on the identified one or more attributes:
Elter states: “The gray value image intensifier 16 can carry out the filtering in such a way that a contrast between regions of the color image 107”
Elter further states: “The gray value image intensifier 16 is designed to filter the gray level image 15 to obtain an enhanced gray value image 19 as a result of the filtering […] In other words, by applying filtering to the gray scale image 115, the gray scale image enhancer 16 can amplify differences between the objects to be determined in the color image 107 and objects to be determined”
Elter teaches a gray value intensifier that adjusts the contrast of the image in such a way as to amplify contrast between the regions, wherein the regions are the identified attributes. Adjusting the contrast in order to enable object identification in the image would be an example of image enhancement, and therefore the gray value intensifier is a picture quality engine. As previously discussed, Elter discloses this process for a sequence of images as well.
Elter discloses one or more attributes, wherein different image processing algorithms are used for enhancing different attributes:
Elter recites: “For example, the characterizer 114 may be designed to perform this characterization based on a specially trained reference database based on shape features, texture features, and / or color features.”
Elter recites: “the device 110 may also comprise a display, for example a monitor, which displays the specific objects.”
Elter here describes the process of using different image processing algorithms (i.e., characterization algorithms each designed based on a respective specially trained reference database) for respective different attributes, as seen in the different shape, texture, and color features. Furthermore, Elter also teaches that the specific objects characterized within the images are displayed, meaning that the output of the characterizer becomes part of the image enhancement.
Regarding the limitation a data collection module to generate labeled images based on the input image sequence labeled with the identified one or more attributes, and to add the labeled images to a training database stored in the memory:
Elter states: “By way of example, the reference feature sets can be trained in advance by hand to the classifier 103, for example based on a database of objects that have already been determined in advance (manually)”
Elter teaches reference feature sets in a database, which may, for example, be objects determined in advance. The objects are one or more identified attributes, and therefore the reference feature sets added to the database would be the labeled images based on an input image sequence.
However, Pezzillo discloses wherein the AI processor is further operative to re-train the one or more models using the labeled images in training database and use the re-trained model for attribute identification in subsequent input images:
Pezzillo, in the same field of endeavor of image processing, teaches re-training a machine learning model (Paragraph 47). In combination with the device of Elter, which enables the previously taught model that uses labeled images in a training database for attribute identification, it would be obvious to use the re-trained model of Pezzillo as this model for reasons of the improvement described further below.
Elter and Pezzillo are analogous art to the present application because they are both in the same field of endeavor, image processing.
Furthermore, Pezzillo discloses an edge device:
Pezzillo teaches the use of an edge device used for executing machine learning models (Paragraph 114), which could be used to run the described teachings of Elter for reasons of the improvement described below.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a device that utilized the teachings of Elter and the teachings of Pezzillo. This would have provided the advantage of improving application execution results and efficiency (Pezzillo, Paragraph 30).
Regarding claim 2, which depends upon claim 1:
Claim 2 recites:
The image processing circuit of claim 1, wherein the AI processor is further operative to execute a machine-learning or deep-learning algorithm to identify the one or more attributes.
Elter in view of Pezzillo disclose the limitations of claim 1 upon which claim 2 depends. Furthermore, regarding the limitation of claim 2:
Elter states: “A Bayesian pixel classification is used to find plasmodium candidates in the first stage”
Elter teaches that the feature extractor identifying objects in the images uses Bayesian pixel classification, a type of machine learning algorithm.
Regarding claim 3, which depends upon claim 1:
Claim 3 recites:
The image processing circuit of claim 1, further comprising a control module to control re-training of the one or more models on the edge device based on an event or a periodic schedule.
Elter in view of Pezzillo disclose the limitations of the device of claim 1 upon which claim 3 depends. However, Elter does not teach the limitation of claim 3:
Pezzillo teaches that a drop in performance may trigger re-training of the machine learning model (Paragraph 47). This would be an example of an event on which re-training of the model is based. Pezzillo further states that the ML model manager may also be executed on another computing system (Paragraph 33), which would be an edge device.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a device that utilized the teachings of Elter and the teachings of Pezzillo. This would have provided the advantage of improving application execution results and efficiency (Pezzillo, Paragraph 30).
Regarding claim 4, which depends upon claim 1:
Claim 4 recites:
The image processing circuit of claim 1, wherein the data collection module is further operative to: receive, via a user interface, a user-identified attribute which changes a given attribute in an input image; and generate a labeled image based on the input image labeled with the user-identified attribute.
Elter in view of Pezzillo disclose the limitations of claim 1 upon which claim 4 depends. Furthermore, regarding the limitation of claim 4:
Elter states: “For example, the database may be created such that a user manually marks portions of a color image as an object or non-object. […] The user may have the opportunity to correct misreading or mis-characterization using drag and drop”
Elter teaches that a user has the ability to correct the misreading of objects within the image via drag and drop. This would be an example of the user identifying an attribute to change in the input image, thereby providing the given attribute. Correction of the image would be generation of a labeled image based on the input image labeled with the user-identified attribute.
Regarding claim 5, which depends upon claim 4:
Claim 5 recites:
The image processing circuit of claim 4, wherein the user interface provides a list of options with respect to the given attribute for selection by a user in response to an indication of poor image quality from the user.
Elter in view of Pezzillo disclose the limitations of claim 4 upon which claim 5 depends. Furthermore, regarding the limitation of claim 5:
Elter states: “For example, the database may be created such that a user manually marks portions of a color image as an object or non-object. […] The user may have the opportunity to correct misreading or mis-characterization using drag and drop”
Elter teaches that a user has the ability to correct the misreading of objects within the image via drag and drop. This would be an example of a list of options that the user has to choose from for correcting the object identification, which is an attribute of the image. Furthermore, an image with incorrect object identification would be of poor image quality, as it fails to capture necessary attributes.
Regarding claim 6, which depends upon claim 4:
Claim 6 recites:
The image processing circuit of claim 4, wherein the data collection module is further operative to: retrieve a plurality of sample images from the training database, each sample image having a confidence level exceeding a predetermined threshold with respect to the user-identified attribute; and providing each sample image for the user to label to thereby generate additional labeled images for the training database.
Elter in view of Pezzillo disclose the limitations of the device of claim 4 upon which claim 6 depends. Furthermore, regarding the limitation providing each sample image for the user to label to thereby generate additional labeled images for the training database:
Elter states: “For example, the database may be created such that a user manually marks portions of a color image as an object or non-object.”
Elter teaches that a user manually identifies a portion of a color image as an object or non-object. This would be the user labeling a sample image to create an additional labeled image.
Furthermore, Pezzillo discloses wherein the data collection module is further operative to: retrieve a plurality of sample images from the training database, each sample image having a confidence level exceeding a predetermined threshold with respect to the user-identified attribute:
Pezzillo teaches labeled observations that are associated with a confidence score of the result of a predictive machine learning model (Paragraph 34). This would be a sample image with a confidence level. Furthermore, Pezzillo teaches that feedback data including these labeled observations is received by a machine learning model manager (Paragraph 4), which here would act as the data collection module that receives a plurality of sample images, wherein those sample images are the labeled observations.
Furthermore, Pezzillo also teaches filtering by a threshold (Paragraph 64).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a device that utilized the teachings of Elter and the teachings of Pezzillo. This would have provided the advantage of improving application execution results and efficiency (Pezzillo, Paragraph 30).
Regarding claim 7, which depends upon claim 1:
Claim 7 recites:
The image processing circuit of claim 1, wherein the data collection module is further operative to: automatically label an input image with a given attribute when a confidence level with respect to the given attribute exceeds a predetermined threshold; and update the training database with automatically labeled images.
Elter in view of Pezzillo disclose the limitations of the device of claim 1 upon which claim 7 depends. However, Elter does not teach the limitation of claim 7:
Pezzillo teaches that confidence scores can be part of feedback data that is coupled with captured observations and derived input data to yield new labeled observations (Paragraph 73). This would be the labeling of an input image with regard to a confidence level, which would be the given attribute of Pezzillo. Furthermore, these labeled observations are sent to the machine learning model manager (Paragraph 73), which would be updating the database with the automatically labeled images.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a device that utilized the teachings of Elter and the teachings of Pezzillo. This would have provided the advantage of improving application execution results and efficiency (Pezzillo, Paragraph 30).
Regarding claim 8, which depends upon claim 1:
Claim 8 recites:
The image processing circuit of claim 1, wherein the one or more attributes of the input image sequence include one or more of: a scene type, an object type in a scene, contrast information, luminance information, edge directions or strength, noise information, segmentation information, and motion information.
Elter in view of Pezzillo disclose the limitations of claim 1 upon which claim 8 depends. Furthermore, regarding the limitation of claim 8:
Elter states: “It is an object of the present invention to provide a concept which allows for improved detection of cells in a receptacle such as plasmodia in a thick blood smear sample”
Elter additionally states: “For example, the database may be created such that a user manually marks portions of a color image as an object or non-object.”
Elter teaches the identification of an object or non-object within an image. Specifically, Elter provides an example where the image is of a blood smear and the object is a cluster of cells. The image would be a scene and the cluster of cells a type of object within the image. Furthermore, as discussed in claim 1, the process undergone by the image of Elter may be applied to a sequence of images.
Regarding claim 9, which depends upon claim 1:
Claim 9 recites:
The image processing circuit of claim 1, wherein the AI processor is further operative to identify a plurality of attributes from the input image sequence according to a plurality of models, wherein each model is used for identifying one of the attributes.
Elter in view of Pezzillo disclose the limitations of the device of claim 1 upon which claim 9 depends. However, Elter does not teach the limitation of claim 9:
Pezzillo teaches that a plurality of devices receive a machine learning model which is trained at the device, and then send feedback data, including labeled observations, back to a central machine learning model manager (Paragraph 76). The various feedback data and observations of the models at their respective devices would be the plurality of attributes, wherein each model sends different feedback data to the machine learning model manager. Therefore, each model is used for identifying one of the attributes of the total feedback data.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a device that utilized the teachings of Elter and the teachings of Pezzillo. This would have provided the advantage of improving application execution results and efficiency (Pezzillo, Paragraph 30).
Regarding claim 10, which depends upon claim 1:
Claim 10 recites:
The image processing circuit of claim 1, wherein the PQ engine is operative to perform the image enhancement operations including one or more of: de-noising, scaling, contrast adjustment, color adjustment, and sharpness adjustment.
Elter in view of Pezzillo disclose the limitations of claim 1 upon which claim 10 depends. Furthermore, regarding the limitation of claim 10:
Elter states: “In the step 204 of providing the binary image, the gray value image can be subjected to filtering in such a way that a contrast between regions of the color image in which the objects to be determined and areas of the color image in which the objects to be determined are amplified is not shown”
Elter teaches that a gray value image can be filtered in such a way that increases contrast between regions of the image. This would be an example of contrast adjustment, which would be one or more of de-noising, scaling, contrast adjustment, color adjustment, and sharpness adjustment.
Claims 11-20 recite a method that parallels the device of claims 1-10 respectively. Therefore, the analysis discussed above with respect to claims 1-10 also applies to claims 11-20 respectively. Accordingly, claims 11-20 are rejected based on substantially the same rationale as set forth above with respect to claims 1-10 respectively.
Regarding claim 21:
Claim 21 recites:
A method performed by an edge device for image enhancement, comprising: receiving, via a user interface, a user-identified attribute with respect to a displayed image; generating a labeled image based on the displayed image labeled with the user-identified attribute; adding the labeled image to a training database stored on the edge device; re-training, by an artificial intelligence (AI) processor in the edge device one or more models stored on the edge device using, at least in part, the labeled image in the training database; identifying, by the AI processor, one or more attributes from an input image sequence using the re-trained model; and generating an output image sequence for display by performing image enhancement operations on the input image sequence based the identified one or more attributes, wherein different image processing algorithms are used for enhancing different attributes.
Regarding the limitation method performed by an edge device for image enhancement, comprising: receiving, via a user interface, a user-identified attribute with respect to a displayed image:
Elter states: “For example, the database may be created such that a user manually marks portions of a color image as an object or non-object.”
Elter teaches that a user manually identifies and marks (i.e., labels) a portion of a color image as an object or non-object. This marking would be a kind of attribute.
Elter does not teach an edge device. This is taught further below by Pezzillo.
Regarding the limitation generating a labeled image based on the displayed image labeled with the user-identified attribute; adding the labeled image to a training database stored on the edge device;
Elter states: “By way of example, the reference feature sets can be trained in advance by hand to the classifier 103, for example based on a database of objects that have already been determined in advance (manually)”
Elter teaches reference feature sets in a database, which may, for example, be objects determined in advance (manually). The objects are a kind of identified attribute, and the reference feature sets added to the database would therefore be the labeled images.
Elter does not teach an edge device. This is taught further below by Pezzillo.
Regarding the limitation generating an output image sequence for display by performing image enhancement operations on the input image sequence based the identified one or more attributes:
Elter states: “The gray value image intensifier 16 can carry out the filtering in such a way that a contrast between regions of the color image 107”
Elter further states: “The gray value image intensifier 16 is designed to filter the gray level image 15 to obtain an enhanced gray value image 19 as a result of the filtering […] In other words, by applying filtering to the gray scale image 115, the gray scale image enhancer 16 can amplify differences between the objects to be determined in the color image 107 and objects to be determined”
Elter teaches a gray value intensifier that adjusts the contrast of the image in such a way as to amplify contrast between the regions, wherein the regions are the identified attribute. Adjusting the contrast in order to enable object identification in the input image would be an example of image enhancement, and Elter has previously taught that this process may be performed on multiple images or an image sequence.
Elter discloses one or more attributes, wherein different image processing algorithms are used for enhancing different attributes:
Elter recites: “For example, the characterizer 114 may be designed to perform this characterization based on a specially trained reference database based on shape features, texture features, and / or color features.”
Elter recites: “the device 110 may also comprise a display, for example a monitor, which displays the specific objects.”
Elter here describes the process of using a different characterization algorithm (i.e., one designed based on a respective specially trained reference database) for each respective different attribute, as seen in the different features (shape, texture, and/or color). Furthermore, Elter also teaches that the specific objects characterized within the images are displayed, meaning that the output of the characterizer becomes part of the image enhancement.
However, Pezzillo discloses re-training, by an artificial intelligence (AI) processor in the edge device one or more models stored on the edge device using, at least in part, the labeled image in the training database; identifying, by the AI processor, one or more attributes from an input image sequence using the re-trained model:
Pezzillo, in the same field of endeavor of image processing, teaches re-training a machine learning model (Paragraph 47). In combination with the device of Elter, which implements the previously taught model that uses labeled images in a training database and performs attribute identification, it would have been obvious to use the re-trained model of Pezzillo as this model for the reasons of improvement described further below.
Elter and Pezzillo are analogous art to the present application because both are in the same field of endeavor, image processing.
Furthermore, Pezzillo discloses an edge device:
Pezzillo teaches the use of an edge device for executing machine learning models (Paragraph 114), which could be used to run the described teachings of Elter for the reasons of improvement described below.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a device that utilized the teachings of Elter and the teachings of Pezzillo. This would have provided the advantage of improving application execution results and efficiency (Pezzillo, Paragraph 30).
Claims 22-25 recite a method that parallels the device of claims 5-8 respectively. Therefore, the analysis discussed above with respect to claims 5-8 also applies to claims 22-25 respectively. Accordingly, claims 22-25 are rejected based on substantially the same rationale as set forth above with respect to claims 5-8 respectively.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDRIA JOSEPHINE MILLER whose telephone number is (703)756-5684. The examiner can normally be reached Monday-Thursday: 7:30 - 5:00 pm, every other Friday 7:30 - 4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mariela Reyes can be reached on (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.J.M./Examiner, Art Unit 2142
/HAIMEI JIANG/Primary Examiner, Art Unit 2142