Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/29/2025 has been entered.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 8, 9, 16-18, and 20-22 are rejected under 35 U.S.C. 103 as being unpatentable over Akiyama (US Patent Pub. # 2010/0103286) in view of Kang (US Patent Pub. # 2016/0019412) and further in view of Nolte (US Patent Pub. # 2020/0387533).
As to claim 1, Akiyama teaches an information processing apparatus comprising:
a processor (CPU (central processing unit)) (Para 182); and
a memory (memory devices (memory media) connected to or built in the processor) (Para 182),
wherein, in a case in which imaging accompanied by a focus operation (AF control section 29) in which a specific subject (object identified by the judgment result) is used as a focus target region is performed by an image sensor (image pick-up element 52), the processor (CPU Fig. 1) outputs specific subject data (selects one object corresponding to a learning image having a highest priority level) related to a specific subject image indicating the specific subject (selects one object corresponding to a learning image having a highest priority level) in a captured image obtained by the imaging as training data used in machine learning (learning processing section 18) (Para 111, 125-127).
Akiyama does not teach coordinates of the focused target region in the captured image. Kang teaches coordinates (coordinates) of the focused target region (ROI1 and ROI2) in the captured image (frame buffer 802) (Para 36 and 37). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have provided the coordinates taught by Kang to the device of Akiyama, to provide the user with more video talk functions, improve the quality of video talk, achieve a better experience of use, and increase the market competitiveness for products (Para 43 of Kang).
Akiyama teaches that, when a plurality of faces are detected, the image data of each detected face and information of a feature vector calculated from the feature of each detected face are transmitted to the learning processing section 18 (i.e., learning is carried out) (Para 142). Akiyama further teaches that if an animal other than a human being is detected in the picked-up image (i.e., an image of an animal is detected) as the result of the cutting out of the arm, the leg, the body shape, and/or the body color (Yes in step S46), the procedure proceeds to step S47; if an image of an animal is not detected (No in step S46), the procedure proceeds to step S48. In step S47, the image data of the detected animal and information of a feature vector calculated from the feature are transmitted to the learning processing section 18 as data for use in learning (Para 143). Akiyama in view of Kang does not teach that the processor displays a plurality of label candidates on a screen for selection and receives a selected label among the plurality of label candidates, wherein each of the plurality of label candidates is information related to a subject image and the selected label is information related to the specific subject image. Nolte (Figs. 4-7) teaches the processor (microprocessor 81) displays a plurality of label (three exemplary keywording trees 130a, 130b, and 130c) candidates on a screen for selection and receives a selected label among the plurality of label candidates, wherein each of the plurality of label candidates (130a, 130b, and 130c) is information related to a subject image (deer, grass, or region) and the selected label (roe deer or grass) is information related to the specific subject image (deer, grass, or region) (Para 41 and 63-67). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have provided a digital media content classification system as taught by Nolte to the device of Akiyama in view of Kang, to overcome such shortcomings of conventional systems, and to overcome limitations brought about by the unstructured archiving of digital media content (Para 7 of Nolte).
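For illustration only, the following is a minimal conceptual sketch of the workflow recited by claim 1 as mapped above: a focused subject region is cropped from a captured image, a label is chosen from displayed candidates, and the labeled crop and its coordinates are output as training data. It is not the implementation of Akiyama, Kang, or Nolte; all function and variable names, and the example labels, are hypothetical.

```python
# Hypothetical sketch of the claimed data flow; not taken from any cited reference.
from dataclasses import dataclass

@dataclass
class TrainingSample:
    crop: list          # pixel data of the specific subject image (placeholder type)
    coordinates: tuple  # (x1, y1, x2, y2) of the focus target region in the captured image
    label: str          # label selected from the displayed candidates

def build_training_sample(captured_image, focus_region, label_candidates, selected_index):
    """Crop the focused subject, attach the selected label, and return training data."""
    x1, y1, x2, y2 = focus_region
    crop = [row[x1:x2] for row in captured_image[y1:y2]]   # extract the specific subject image
    label = label_candidates[selected_index]               # label chosen by the user on screen
    return TrainingSample(crop=crop, coordinates=focus_region, label=label)

# Example: an 8x8 placeholder image, one focus region, and three label candidates.
sample = build_training_sample(
    captured_image=[[0] * 8 for _ in range(8)],
    focus_region=(2, 2, 6, 6),
    label_candidates=["roe deer", "grass", "other"],
    selected_index=0,
)
print(sample.coordinates, sample.label)
```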
As to claim 2, Akiyama (Figs. 8 and 9) teaches wherein the machine learning (18) is supervised machine learning, and the processor (CPU Fig. 1) gives a label, which is information (object's personal information registered in the phone book data) related to the specific subject image (object whose face image data), to the specific subject data (object whose face image data), and outputs the specific subject data (object whose face image data) as training data used in the supervised machine learning (Para 129-131 and 149). Akiyama teaches that, in step S21, if the key input section 11 accepts an instruction, entered by a user, that requires storage of the picked-up image (Yes in step S21), the procedure proceeds to step S22 (Para 129). In step S23, the object learning data extracting section 17 and the learning processing section 18 carry out the learning process (Para 130).
As to claim 3, Akiyama teaches wherein the processor (CPU Fig. 1) displays the focus target region (focused object) in an aspect that is distinguishable from other image regions in a state in which a video (video camera) for display (display) based on a signal output from the image sensor (52) is displayed on a monitor (display), and the specific subject image is an image corresponding to a position of the focus target region (focused object) in the captured image (Para 133 and 146).
As to claim 4, Akiyama teaches wherein the processor (CPU Fig. 1) displays the focus target region (focused object) in the aspect that is distinguishable from the other image regions by displaying a frame (focused regions may be surrounded by respective frames of different colors, which differ depending on their propriety levels) that surrounds the focus target region (focused object) in the video for display (display) (Para 133).
As to claim 8, Akiyama (Fig. 6, 9, and 11) teaches wherein the processor (CPU Fig. 1) displays a video (video camera) for display (display) based on a signal output from the image sensor (1) on a monitor, receives designation of the focus target region in the video for display (display), and extracts the specific subject image (focused object) based on a region of which a similarity evaluation value indicating a degree of similarity to the focus target region (focused object) is within a first predetermined range in a predetermined region (focused object) including the focus target region (Para 130, 133, and 146).
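As an illustration of the kind of similarity test recited in claim 8 (extracting the specific subject image based on a region whose similarity evaluation value relative to the focus target region falls within a first predetermined range), the following is a minimal sketch under assumed inputs; the similarity measure, threshold, and names are hypothetical and are not drawn from Akiyama.

```python
# Hypothetical similarity test; the metric and "predetermined range" are illustrative assumptions.
def mean_intensity(region):
    pixels = [p for row in region for p in row]
    return sum(pixels) / len(pixels)

def within_similarity_range(candidate_region, focus_target_region, max_difference=10.0):
    """Return True when the similarity evaluation value is within the predetermined range."""
    difference = abs(mean_intensity(candidate_region) - mean_intensity(focus_target_region))
    return difference <= max_difference  # "first predetermined range" modeled as a threshold

focus = [[100, 102], [101, 103]]
candidate = [[98, 99], [100, 101]]
print(within_similarity_range(candidate, focus))  # True: candidate resembles the focus target
```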
As to claim 9, Akiyama teaches wherein the processor (CPU Fig. 1) displays the focus target region (focused object) in an aspect that is distinguishable (focused regions may be surrounded by respective frames of different colors, which differ depending on their propriety levels) from other image regions (Para 133 and 134).
As to claim 16, Akiyama teaches wherein the processor (CPU Fig. 1) stores the data (picked-up image) in the memory (another memory), and performs the machine learning (object learning data extracting section 17 and the learning processing section 18) using the data stored in the memory (another memory) (Para 130).
As to claim 17, Akiyama teaches a learning device (CPU Fig. 1) comprising: a reception device (picked-up image data storing section 16) that receives the data output from the information processing apparatus (CPU Fig. 1) according to claim 1 (see the rejection of claim 1 above); and an operation device (object learning data extracting section 17 and the learning processing section 18) that performs the machine learning (17 and 18) using the data received by the reception device (16) (Para 111, 125-127, and 130).
As to claim 18, Akiyama teaches an imaging apparatus comprising: the information processing apparatus according to claim 1 (see the rejection of claim 1 above); and the image sensor (image pick-up element 52) (Para 94).
As to claims 20 and 21, these claims differ from claim 1 only in that claim 1 is an information processing apparatus claim, whereas claims 20 and 21 are, respectively, a control method claim and a claim to a non-transitory computer-readable storage medium storing a program executable by a computer. Thus, claims 20 and 21 are analyzed as previously discussed with respect to claim 1 above.
As to claim 22, Kang teaches wherein the coordinates (coordinates) of the focused target region (ROI1 and ROI2) are position coordinates of at least two corners (two diagonally opposite corners of the rectangular ROI1 and ROI2) of the focused target region (ROI1 and ROI2) in the captured image (802) (Para 36).
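For illustration, a rectangular region of interest defined by the position coordinates of two diagonal corners, as in the mapping to Kang above, fully determines the region. The following short sketch (hypothetical names, not Kang's code) shows how the region's width and height follow from the two corner coordinates.

```python
# Hypothetical example: an ROI given by two diagonal corner coordinates in the captured image.
def roi_size(corner_a, corner_b):
    """Return (width, height) of the rectangle spanned by two diagonal corners."""
    (x1, y1), (x2, y2) = corner_a, corner_b
    return abs(x2 - x1), abs(y2 - y1)

print(roi_size((120, 80), (360, 240)))  # -> (240, 160)
```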
Claims 5, 6, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Akiyama (US Patent Pub. # 2010/0103286) in view of Kang (US Patent Pub. # 2016/0019412), further in view of Nolte (US Patent Pub. # 2020/0387533), and further in view of Li (US Patent Pub. # 2020/0112685).
As to claim 5, note the discussion above with regard to claims 1, 3, and 4. Akiyama in view of Kang, further in view of Nolte, does not teach wherein a position of the frame is changeable in accordance with a given position change instruction. Li teaches wherein a position of the frame is changeable in accordance with a given position change instruction (user manually selects a region as the target ROI in the preview image) (Para 33). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have provided manual selection of a region by the user as taught by Li to the device of Akiyama in view of Kang and Nolte, to realize displaying a focus region to a user in a photographed image and thereby solve the problem in the prior art that it cannot be determined whether an out-of-focus or blurred image is caused by inappropriate setting of the focus region (Para 7 of Li).
As to claim 6, Li teaches wherein a size of the frame is changeable in accordance with a given size change instruction (user manually selects a region as the target ROI in the preview image) (Para 33).
As to claim 14, Li (Fig. 5) teaches wherein the specific subject data (target ROI) includes coordinates (coordinates) of the specific subject image (target ROI), and the processor (determining unit 132) outputs the captured image and the coordinates of the specific subject image (target ROI) (Para 77 and 99). Akiyama teaches the machine learning (see the rejection of claim 1 above).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER K PETERSON whose telephone number is (571)270-1704. The examiner can normally be reached Monday-Friday 7AM-4PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sinh N. Tran, can be reached at 571-272-7564. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHRISTOPHER K PETERSON/Primary Examiner, Art Unit 2637 12/7/2025