DETAILED ACTION
Allowable Subject Matter
Claims 3-10 and 15-19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 11, 13-14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Takatsuka et al. (US 2021/0192692), hereinafter referred to as Takatsuka, in view of Li et al. (US 2016/0269706), hereinafter referred to as Li, further in view of Sigal et al. (US 9,477,908), hereinafter referred to as Sigal, and further in view of Winn et al. (US 2008/0075361), hereinafter referred to as Winn.
In regards to claim 1, Takatsuka teaches:
“A method for adjusting camera configurations in an electronic device, the method comprising: [capturing] an image of a scene to be obtained by a camera of the electronic device”
Takatsuka Figure 3 teaches capturing an image for performing the adjusting method.
“identifying relevant bounding boxes associated with the image, each relevant bounding box comprising a corresponding object”
Takatsuka teaches in paragraph [0133] the object region recognition unit 82 detects a region of an object that is a candidate to be detected and performs processing of recognizing a region (bounding box) surrounding an object to be detected in an image (frame) of the object to be detected.
“generating a set of reference parameters associated with the corresponding objects of the relevant bounding boxes by identifying a reference parameter for the corresponding object of each relevant bounding box of the relevant bounding boxes”
Takatsuka paragraph [0156] teaches the information of the classes and the information of the bounding box are provided to the parameter selection unit 84, and the parameter selection unit 84 selects one parameter set using the information of the classes from among stored parameter sets PR1, PR2, and the like.
“identifying one or more reference images and retrieving corresponding reference configurations associated with the identified one or more reference images based on the set of reference parameters”
Takatsuka paragraph [0163] and Figure 4A teach that deep learning is performed using a large number of human images as training data SD, and the parameter set PR1 with the highest image recognition rate in the viewpoint of recognition of a person is generated. Takatsuka paragraph [0165] teaches that, as illustrated in FIG. 4B, the parameter sets PR1, PR2, PR3, and the like corresponding to the generated classes are stored so that the parameter selection unit 84 can select them.
“and configuring the camera of the electronic device based on the reference configurations of at least one of one or more reference images”
Takatsuka paragraph [0151] teaches as the classified image adaptation processing, appropriate parameters (image quality adjustment values) are stored for each class of an object that can be targeted. Then, for the image captured by the array sensor 2, object detection and class identification of a detected object are performed, the parameters are selected according to the identified class and set in the logic unit 5, and the processing using the parameters is performed in the logic unit 5.
Takatsuka teaches:
“wherein the retrieving the corresponding reference configurations associated with the identified one or more reference images comprises retrieving the corresponding reference configurations from a storage”
Takatsuka paragraph [0138] teaches the parameter selection unit 84 stores the parameters for signal processing according to each class, and selects corresponding one or a plurality of parameters using the class or the bounding box of the detected object identified by the class identification unit 83, for example. Then, the parameter selection unit 84 sets the one or the plurality of parameters in the logic unit 5. Takatsuka paragraph [0163] teaches in a case of generating the parameter set of the class “person”, as illustrated in FIG. 4A, deep learning is performed using a large number of human images as training data SD, and the parameter set PR1 with the highest image recognition rate in the viewpoint of recognition of a person is generated. Takatsuka paragraph [0165] and Figure 4B teach the parameter sets PR1, PR2, PR3, and the like corresponding to the generated classes are stored so that the parameter selection unit 84 can select them. The Examiner interprets that the reference images are the training images, for example, human images. Based on this, it is interpreted that the retrieved corresponding reference configurations, e.g. PR1, correspond to the identified one or more reference images, e.g. human images.
Takatsuka does not explicitly teach:
“rendering [an image]”
However, rendering an image prior to performing adjustments is a routine implementation. For example, Li teaches in paragraph [0065] the preview window displays a preview image of a scene photographed by the camera device. The present disclosure allows the user to select a part of the preview image by the selection box for a user-defined colour temperature value, and acquire a colour temperature value of the selected part. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Takatsuka in view of Li to have included the features of “rendering [an image]” because, in the preset environment adjustment mode, it is hard to acquire various highly-individualized photographic works, and the individual needs of users cannot be satisfied (Li [0002]).
Takatsuka/Li do not explicitly teach:
“the corresponding object of each relevant bounding box comprising a co-occurrence value” and “bounding box”
However, Sigal column 6 lines 16-30 teaches in one embodiment, models may be learned separately for each object-level category. For the c-th object-level category, the model may score bounding box i as follows:
S_c(x_i, y_{c,i}, h_i) = α_{p_i}^T x_i · y_{c,i} + Σ_{j∈L_i} β_{p_j}^T d_{ij} · h_{ij} + Σ_{j∈L_i} γ_{p_i,p_j}^T x_j · h_{ij},   (3)
where α_{p_i}^T x_i · y_{c,i} is a root model representing the confidence that the object belongs to an object-level category given subcategory detector output (e.g., the confidence that an “object” is a bicycle given the subcategory detector output), β_{p_j}^T d_{ij} · h_{ij} is a context model representing the confidence that the object belongs to the object-level category given the positions of other subcategories in the image (e.g., the confidence that an object is a “bicycle” given the position of the object relative to a “rider” subcategory label detected in the image), and γ_{p_i,p_j}^T x_j · h_{ij} is a co-occurrence model representing the confidence that the object belongs to the object-level category given the co-occurrence of two objects (e.g., the confidence that an object is a bicycle given the co-occurrence of the object with a “rider”). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Takatsuka/Li in view of Sigal to have included the features of “the corresponding object of each relevant bounding box comprising a co-occurrence value” because joint appearance change of the person and the bicycle, resulting from their interaction, is ignored by traditional object-detection models (Sigal column 1 lines 30-35).
Takatsuka/Li/Sigal do not explicitly teach:
“wherein the co-occurrence value corresponds to a frequency of occurrence of the corresponding object within a corresponding [object region]”
Winn paragraph [0031] teaches our system will learn that "eye pixels" tend to occur near "mouth" or "nose pixels". Therefore, shape filters enable us to model local co-occurrences of object parts within an object region. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Takatsuka/Li/Sigal in view of Winn to have included the features of “wherein the co-occurrence value corresponds to a frequency of occurrence of the corresponding object within a corresponding [object region]” because there is a need to provide simple, accurate, fast and computationally inexpensive methods of object detection and recognition for many applications (Winn [0003]).
In regards to claim 2, Takatsuka/Li/Sigal/Winn teach all the limitations of claim 1 and further teach:
“wherein the identifying the relevant bounding boxes comprises: generating a plurality of bounding boxes associated with the image”
Takatsuka Figure 6, inter alia, teaches generating a plurality of bounding boxes associated with the image.
“and identifying the relevant bounding boxes from the plurality of bounding boxes based on respective areas of the plurality of bounding boxes and a respective classification value of each of the plurality of bounding boxes”
Takatsuka Figure 6 teaches classifying the images in the bounding box. Takatsuka paragraph [0151] teaches as the classified image adaptation processing, appropriate parameters (image quality adjustment values) are stored for each class of an object that can be targeted. Then, for the image captured by the array sensor 2, object detection and class identification of a detected object are performed, the parameters are selected according to the identified class and set in the logic unit 5, and the processing using the parameters is performed in the logic unit 5.
Takatsuka/Li/Sigal/Winn further teach:
“and wherein the classification value of each of the plurality of bounding boxes is indicative of a number of classes that a corresponding bounding box is categorized into”
Sigal column 2 lines 40-45 teaches identified subcategories may further be used to initialize mixture components in mixture models which the object detection application trains in a latent SVM framework. Such training learns, for each object category, a variable (as opposed to fixed) number of subcategory classifiers. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Takatsuka/Li in view of Sigal to have included the features of “and wherein the classification value of each of the plurality of bounding boxes is indicative of a number of classes that a corresponding bounding box is categorized into” because joint appearance change of the person and the bicycle, resulting from their interaction, is ignored by traditional object-detection models (Sigal column 1 lines 30-35).
In regards to claim 11, Takatsuka/Li/Sigal/Winn teach all the limitations of claim 1 and further teach:
“wherein the storage is one of a memory of the electronic device and a cloud based storage”
Takatsuka paragraph [0138] teaches the parameter selection unit 84 stores the parameters for signal processing according to each class. From Figure 1, the parameter selection unit 84 is part of the electronic device, e.g. sensor device 1.
In regards to claim 13, Takatsuka/Li/Sigal/Winn teach all the limitations of claim 1 and claim 13 contains similar limitations. Therefore, claim 13 is rejected for similar reasoning as applied to claim 1. Additionally, Takatsuka Figure 1 teaches the system implementation with a processor and memory.
In regards to claim 14, Takatsuka/Li/Sigal/Winn teach all the limitations of claim 13 and claim 14 contains similar limitations as in claim 2. Therefore, claim 14 is rejected for similar reasoning as applied to claim 2.
In regards to claim 20, Takatsuka/Li/Sigal/Winn teach all the limitations of claim 1 and claim 20 contains similar limitations written in article format. It would have been obvious to those of ordinary skill in the art to have practiced the invention as an article. Therefore, claim 20 is rejected for similar reasoning as applied to claim 1.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Takatsuka in view of Li, Sigal, and Winn, and further in view of Jelicz (US 2012/002094), hereinafter referred to as Jelicz.
In regards to claim 12, Takatsuka/Li/Sigal/Winn teach all the limitations of claim 1 and further teach:
“wherein the rendering the image of the scene to be obtained comprises rendering the image on a view finder 155 of the electronic device”
Jelicz Figures 2-3 teach generating a preview image.
“and wherein the method further comprises: controlling the electronic device to display the one or more reference images on a user interface of the electronic device”
Jelicz Figures 2B-D, inter alia, teach displaying a reference image 220.
“receiving a user input of a selection of a reference image of the one or more reference images”
Jelicz Figure 3 and paragraph [0061] teaches the image pickup apparatus 100 selects a reference image in operation S330. Specifically, the image pickup apparatus 100 may select, as the reference image, image data selected by a user input through a user operation unit 180.
“controlling the camera to adopt configurations similar to the reference configurations of the selected reference image”
Jelicz paragraph [0055] teaches the first area 240 is the subject area where the user wishes to check the precision of the overlapping. The first area 240 may be moved through the user operation unit 180.
“and controlling the camera to obtain, using the adopted configurations, the image being displayed on the view finder”
Jelicz teaches in the Abstract as the preview image is displayed together with the reference image when taking a photograph, a user is able to compare the preview image with the previously photographed image and take the photograph accordingly.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Takatsuka/Li/Sigal/Winn in view of Jelicz to have included the features of “wherein the rendering 402 the image of the scene to be obtained comprises rendering the image on a view finder 155 of the electronic device, and wherein the method further comprises: controlling the electronic device to display the one or more reference images on a user interface of the electronic device; receiving a user input of a selection of a reference image of the one or more reference images; controlling the camera to adopt configurations similar to the reference configurations of the selected reference image; and controlling the camera to obtain, using the adopted configurations, the image being displayed on the view finder” because, if the user successfully opens the searched image, since the user cannot compare the searched image with the image to be photographed, the direction and background of the previously photographed picture and those of the picture to be taken may not exactly match, such that the user is unable to take a desired picture (Jelicz [0007]).
Response to Arguments
Applicant's arguments filed 7/9/2025 have been fully considered but they are not persuasive.
As indicated in the rejection above, Takatsuka paragraph [0138] teaches the parameter selection unit 84 stores the parameters for signal processing according to each class, and selects corresponding one or a plurality of parameters using the class or the bounding box of the detected object identified by the class identification unit 83, for example. Then, the parameter selection unit 84 sets the one or the plurality of parameters in the logic unit 5. Takatsuka paragraph [0163] teaches in a case of generating the parameter set of the class “person”, as illustrated in FIG. 4A, deep learning is performed using a large number of human images as training data SD, and the parameter set PR1 with the highest image recognition rate in the viewpoint of recognition of a person is generated. Takatsuka paragraph [0165] and Figure 4B teach the parameter sets PR1, PR2, PR3, and the like corresponding to the generated classes are stored so that the parameter selection unit 84 can select them. The Examiner interprets that the reference images are the training images, for example, human images. Based on this, it is interpreted that the retrieved corresponding reference configurations, e.g. PR1, correspond to the identified one or more reference images, e.g. human images.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL E TEITELBAUM, Ph.D. whose telephone number is (571)270-5996. The examiner can normally be reached 8:30AM-5:00PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Miller can be reached at 571-272-7353. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL E TEITELBAUM, Ph.D./Primary Examiner, Art Unit 2422