DETAILED ACTION
This office action is responsive to communication(s) filed on 11/26/2025.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims Status
Claims 1-3, 6-14, and 17-20 are pending and are currently being examined.
Claims 1, 12 and 18 are independent.
Claims 4-5 and 15-16 are newly canceled.
Claims 1, 6-9, 11-13, and 17-19 are newly amended.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-3, 6-8, 12-14, and 17-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nakamura; Takatoshi et al. (hereinafter Nakamura – US 20080175103 A1) in view of Yamaji; Kei et al. (hereinafter Yamaji – US 20140281965 A1) and Bourdev; Lubomir D. et al. (hereinafter Bourdev – US 20130121584 A1).
Independent Claim 1:
Nakamura teaches:
An electronic device, comprising: (see fig. 1)
a display; (fig 1:129 and ¶ 192)
a memory configured to store one or more instructions; (ROM 102 storing programs, ¶ 44 and fig. 1)
and a processor configured to: (CPU 101 for executing the programs, ¶ 44 and fig. 1)
control the display, in a date area of a calendar user interface (UI), to display a first image having time information corresponding to the date area, among a plurality of images, (an image having a matching date [time information] on the calendar is displayed on the calendar within a space representing that date [corresponding to the date area], while multiple other images are displayed on different date spaces [among a plurality of images], ¶ 98 and fig. 5. Two dates with the same month and day in different years are both considered to hold corresponding time information because they share a specific, identifiable position within the annual, recurring time period structure of a calendar system.)
and based on the first image being selected among the plurality of images, identify a context included in the first image, search for a second image that is different from the first image and corresponds to the identified context, (when the user specifies [first image being selected] an image, e.g., by pointing at it with a cursor, the device, e.g., using control part 100, searches for different related images [search for a second image that is different from the first image] pertaining to the selected date [identify a context included in the first image… search for a second image [that]…corresponds to the identified context], ¶¶ 142-143 and figs. 8 and 9)
and control the calendar UI to display the second image together with the first image on the calendar UI, (the device displays the images together with the first image, e.g., in a chronological order, ¶ 143 and figs 8 and 9)
[…],
and wherein based on two or more images having the time information corresponding to the date area being selected among the plurality of images, (the device displays two or more images together that have a matching date [having the time information corresponding to the date area], albeit in different years, e.g., in a chronological order, Nakamura ¶ 143 and figs. 8 and 9)
Nakamura does not appear to expressly teach, but Yamaji teaches:
wherein the processor is further configured to select, as the second image, one or more images from among the plurality of images to which a user preference input value is provided by a user, (user preference information is used as an input [user preference input value] for automatic selection of photos, wherein a user can choose to adopt the automatic selection or “can further edit the layout” [user preference input value]; therefore, whether or not the automated selection of photos and layout is adopted, the selection/layout is based on at least one [user preference input value], Yamaji, ¶ 131 and fig. 1)
the processor is further configured to select one of the two or more images as the first image, based on the user preference input value. (user preference information is used as an input [user preference input value] for automatic selection of photos, wherein a user can choose to adopt the automatic selection or “can further edit the layout” [user preference input value]; therefore, whether or not the automated selection of photos and layout is adopted, the selection/layout is based on at least one [user preference input value], Yamaji, ¶ 131 and fig. 1. As claimed, the instant invention does not preclude the selection in the phrase “based on the first image being selected among the plurality of images” from being a selection performed by a user, or limit this selection to being the same selecting step as the one in the phrase “select one of the two or more images as the first image.”) Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to modify the device of Nakamura to include wherein the processor is further configured to select, as the second image, one or more images from among the plurality of images to which a user preference input value is provided by a user, and wherein the processor is further configured to select one of the two or more images as the first image, based on the user preference input value, as taught by Yamaji.
One would have been motivated to make such a combination in order to easily select desired images from a large number of images, Yamaji ¶ 38. It was well within the capabilities of a person having ordinary skill in the art to have realized that applying Yamaji’s automatic, preference-based layout to both the initial image display [select one of the two or more images as the first image] and subsequent image displays [display the second image together with the first image] is a straightforward implementation, yielding the same ease of selection of desired images from a large number of images.
Nakamura further teaches that event information can be used as identification information to identify and extract content, and that this information includes things such as locations, people’s names, and color, ¶ 194.
Nakamura-Yamaji does not appear to expressly teach, but Bourdev teaches:
wherein the context includes information on object types, colors, and materials in the first image (contextual features that can be extracted from images include color [colors], type of clothing [object types], and texture of materials [materials], ¶ 91. Identifying the texture of a material acts as a representation of material identification because specific surface qualities—such as rough, smooth, grainy, or porous—are physically characteristic of particular materials, e.g., wood is rough and glass is smooth.)
Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to further modify the device of Nakamura to include wherein the context includes information on object types, colors, and materials in the first image, as taught by Bourdev.
One would have been motivated to make such a combination in order to enrich the identification capabilities of the device by providing additional contextual cues to improve the accuracy, Bourdev ¶¶ 6 and 24, of identifying people in the photos, Nakamura ¶ 194.
Claim 2:
The rejection of claim 1 is incorporated. Nakamura further teaches:
wherein the processor is further configured to search for the second image having the identified context within a preset date range based on a date corresponding to the first image. (the chronological display of the images is, e.g., of images “taken on the same month and day in the past” [within a preset date range based on a date corresponding to the first image], ¶ 147)
Claim 3:
The rejection of claim 1 is incorporated. Nakamura further teaches:
wherein the processor is further configured to identify the context corresponding to a user schedule, among a plurality of contexts included in the first image, and search for the second image having the identified context. (the images are retrieved based on events, e.g., past events, recorded in the user’s calendar, e.g., a first image of a tree was captured [event on user’s schedule] when the tree was planted in a garden on Oct. 18, 2003, and the other images of the tree were captured [event on user’s schedule] on the same month and day every year, ¶ 137)
Claim 6:
The rejection of claim 1 is incorporated. Nakamura-Yamaji further teaches:
wherein the processor is further configured to control the display to display the first image as a thumbnail image of the first image in the date area of the calendar UI. (thumbnail images, Nakamura ¶ 137 and Nakamura fig. 8)
Claim 7:
The rejection of claim 6 is incorporated. Nakamura-Yamaji further teaches:
wherein: based on the thumbnail image being selected from the calendar UI, the processor is further configured to control the display to display a pop-up window having at least one of a first area displaying the first image, (Nakamura fig. 8 reflects a first image at the first area located at the bottom of the pop-up window overlaid on the calendar)
a second area displaying the context included in the first image, (Nakamura fig. 8 reflects a second area, above the first area, showing the date [context] of the first image)
a third area displaying the second image, (Nakamura fig. 8 reflects at least six other areas in the pop-up window, including a third area and a fourth area located above, for displaying other and/or remaining images. For purposes of compact prosecution only, the examiner interprets the limitation(s) as being directed to a search for at least a second image, and that second image includes an image other than the first image.)
and a fourth area displaying remaining images other than the first image from among the plurality of images. (Nakamura fig. 8 reflects at least six other areas in the pop-up window, including a third area and a fourth area located above, for displaying other and/or remaining images. For purposes of compact prosecution only, the examiner interprets the limitation(s) as being directed to “remaining images” that include “other images” besides the first and second images.)
Claim 8:
The rejection of claim 7 is incorporated. Nakamura-Yamaji further teaches:
wherein the processor is further configured to determine an arrangement position of each second image in the third area based on a user preference set for each second image. (Yamaji teaches creating a layout of selected photos based on user preference, ¶¶ 130-131, e.g., placing preferred persons at the center, ¶ 133. See Nakamura fig. 8 for the “third area,” as explained for claim 7)
Independent Claims 12 and 18:
Claim(s) 12 and 18 are directed to a method and recording medium for accomplishing the functions of the device in claim 1, and are rejected using similar rationale(s).
Claims 13 and 19:
The rejections of claims 12 and 18 are incorporated. Claims 13 and 19 are directed to a method and recording medium for accomplishing the functions of the device in claim 2, and are rejected using similar rationale(s).
Claims 14 and 20:
The rejections of claims 12 and 18 are incorporated. Claims 14 and 20 are directed to a method and recording medium for accomplishing the functions of the device in claim 3, and are rejected using similar rationale(s).
Claim 17:
The rejection of claim 12 is incorporated. Claim 17 is directed to a method for accomplishing the functions of the device in claim 6, and is rejected using similar rationale(s).
Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nakamura (US 20080175103 A1) in view of Yamaji (US 20140281965 A1) and Bourdev (US 20130121584 A1), as applied to claim 1 above, and further in view of Zambetti; Nicholas et al. (hereinafter Zambetti – US 20150067513 A1).
Claim 9:
The rejection of claim 1 is incorporated. Nakamura-Yamaji further teaches:
further comprising a display configured to receive [an] input, (Yamaji teaches that user inputs are received for editing a layout of images, ¶¶ 120-121)
wherein the user preference input value is set based on […] of the […] input on each of the plurality of images. (the inputs are received on images to be edited, ¶ 80, and a preference is updated/determined based on the inputs, Yamaji ¶¶ 83-84 )
Nakamura-Yamaji does not appear to expressly teach, but Zambetti teaches:
that the input is “touch” input (touch sensitive surfaces are a common physical architecture for user-interfaces, ¶ 70)
and the setting is “a time duration” of the touch (that an operation can be performed based on a tap and hold [a time duration] touch input, ¶ 345)
Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to further modify the device of Nakamura to include further comprising a display configured to receive a touch input, wherein the user preference input value is set based on a time duration of the touch input on each of the plurality of images, as taught by Zambetti.
One would have been motivated to make such a combination in order to improve the practicality of the device by providing known, common user-interface architecture and related interactions, Zambetti ¶ 70.
Claim(s) 10-11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nakamura (US 20080175103 A1) in view of Yamaji (US 20140281965 A1) and Bourdev (US 20130121584 A1), as applied to claim 1 above, and further in view of Novikoff; Timothy et al. (hereinafter Novikoff – US 20180068019 A1).
Claim 10:
The rejection of claim 1 is incorporated. Nakamura does not appear to expressly teach, but Novikoff teaches:
wherein the processor is further configured to: identify an object included in the first image, and identify a context of the object as the context of the first image (a system that is capable of understanding the semantic theme of images, based on identifying one or more objects within the images, e.g., “snow” or other winter object [context of the object] is used to understand that the image depicts a “winter” theme [context of the first image], ¶ 138).
Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to modify the device of Nakamura to include wherein the processor is further configured to: identify an object included in the first image, and identify a context of the object as the context of the first image, as taught by Novikoff.
One would have been motivated to make such a combination in order to improve the device’s capabilities and usability by affording automatic inference of keywords/concepts that are added to image metadata for image identification/retrieval purposes, e.g., during image searches, Novikoff ¶¶ 59 and 138-139.
Claim 11:
The rejection of claim 1 is incorporated. Nakamura does not appear to expressly teach, but Novikoff teaches:
wherein the processor is further configured to: identify a plurality of objects included in the first image, select an object from among the plurality of objects based on user preference, identify the context of a first object as the context of the first image. (a system that is capable of understanding the semantic theme of images, based on identifying one or more objects within the images, e.g., “snow” or other winter object [context of the object] found within one or more images, is used to understand that the image(s) depicts a “winter” theme [context of the first image], ¶ 138. The identification of the object is done based on user consent [based on user preference], ¶¶ 66 and 138).
Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to modify the device of Nakamura to include wherein the processor is further configured to: identify a plurality of objects included in the first image, select an object from among the plurality of objects based on user preference, and identify the context of a first object as the context of the first image, as taught by Novikoff.
One would have been motivated to make such a combination in order to improve the device’s capabilities and usability by affording automatic inference of keywords/concepts that are added to image metadata for image identification/retrieval purposes, e.g., during image searches, Novikoff ¶¶ 59 and 138-139.
Response to Arguments
The 112(b) rejections of claims 7-8 have been overcome by claim amendment(s).
Applicant's 102 arguments have been fully considered and are persuasive for the 102 rejection; however, the 102 rejection has been replaced with a 103 rejection due to claim amendment.
Applicant's 103 arguments have been fully considered, but they are not persuasive, or are moot in view of new grounds of rejection above.
First, the applicant alleges that Nakamura doesn’t teach the new context limitation(s) added to claim 1.
This argument is persuasive for the 102 rejection; however, the 102 rejection has been replaced with a 103 rejection due to claim amendment.
Second, the applicant alleges that Yamaji doesn’t teach selecting images using user preferences, but instead only automatically selects images, Remarks Page(s) 10-11.
The examiner respectfully disagrees because Yamaji ¶ 131 reflects an automatic selection that is based on user preferences, after which a user can choose to adopt the automatic selection or “can further edit the layout,” thereby further instructing the device to select the images based on user preferences.
Third, the applicant alleges that the combination of references fails to teach “wherein based on two or more images having the time information corresponding to the date area being selected among the plurality of images, the processor is further configured to select one of the two or more images as the first image, based on the user preference input value”, as claimed, because Nakamura only discloses an operation of decompressing images and generating a chronological display section, per ¶ 143, and “selecting a first image as a representative image based on at least two images having time information corresponding to a date”, and therefore supposedly does not disclose the “claimed features of allowing a user to efficiently generate a calendar UI with images without additional input”, Remarks Page 11.
The examiner respectfully disagrees because:
Although the claim language doesn’t require some of the terminology/limitations used by the applicant, e.g., a “representative image”, Nakamura clearly discloses displaying a first and second image in a same area based on those images having the same date. Specifically, as explained in the 103 rejection section above, an image having a matching date [time information] on the calendar is displayed on the calendar within a space representing that date [corresponding to the date area], while multiple other images are displayed on different date spaces [among a plurality of images], ¶ 98 and fig. 5. When the user specifies [first image being selected] an image, e.g., by pointing at it with a cursor, the device, e.g., using control part 100, searches for different related images [search for a second image that is different from the first image] pertaining to the selected date [identify a context included in the first image… search for a second image [that]…corresponds to the identified context], ¶¶ 142-143 and figs. 8 and 9, and the device displays the images together with the first image, e.g., in a chronological order, ¶ 143 and figs. 8 and 9. Two dates with the same month and day in different years are both considered to hold corresponding time information because they share a specific, identifiable position within the annual, recurring structure of a calendar system.
The claim doesn’t require “allowing a user to efficiently generate a calendar UI with images without additional input”. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Furthermore, the applicant attacks Nakamura for not teaching the user preference concepts, but it is the Nakamura-Yamaji combination that teaches these concepts, as explained in the 103 rejection above. One cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Fourth, the applicant relies on the arguments above to allege patentability of the remaining claims. Remarks Pages 11-12.
The examiner respectfully disagrees for the reason(s) above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Below is a list of these references, including why they are pertinent:
Kim; Tae-Young et al., US 20020003531 A1, is pertinent to claim 1 for disclosing a method for displaying a calendar and allowing image data to be assigned to specific date regions, ¶ 27 and figs. 8-10.
Ko; Hyeon Mok et al. US 11853108 B2, is pertinent to claim 1 for disclosing a method for searching for a second image related to a first image, Abstract and Ko Claim 1.
Hirakawa; Daisuke et al. US 10497079 B2, is pertinent to claim 1 for disclosing a device which displays photos on a calendar, col 4:10-27 and fig 8.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GABRIEL S MERCADO whose telephone number is (408)918-7537. The examiner can normally be reached Mon-Fri 8am-5pm (Eastern Time).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kieu Vu can be reached at (571) 272-4057. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Gabriel Mercado/ Primary Examiner, Art Unit 2171