Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 5, 6, 13, 15-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Gokturk et al. (US 7,519,200 B2) in view of Tran et al. (US 2016/0189009 A1), and in further view of Candalore (US 2015/0104103 A1).
As to claim 1, Gokturk discloses: A mobile device comprising: a memory comprising a content database and a local storage; an image sensor coupled to the memory and configured to capture image data for a first image (Gokturk, col. 5, lines 31-35, and col. 28, lines 57-62); and one or more processors coupled to the memory and the image sensor, and configured to perform operations comprising: (In Gokturk, as discussed in col. 3, lines 16-24, a captured image is analyzed to “recognize information from image data contained in the captured image,” and face, clothing, apparel, and combinations of characteristics may be utilized to automatically process and analyze the image content of the captured image. Further, as discussed in col. 40, line 40, “stored data that corresponds to an image is supplemented with metadata that identifies one or more objects in the captured image that have been previously recognized.” Further, as shown in Fig. 18, recognition information in an image file, such as a face, a landmark, or text, is stored in the header of the file; and, as shown in Fig. 19, metadata 1930 and recognition information 1940 are stored in a data store 1970.)
However, Gokturk fails to disclose the limitations of “accessing the image data for the first image; processing, by the one or more processors, the image data using a filter convolved across a width and a height of the first image in a neural network to determine one or more characteristics of the image data selected from a set of tags and based at least in part on image content of the first image” and of “processing the one or more characteristics to assign a privacy status indicator for the image data; and responsive to identifying the privacy status indicator to be private, encrypting, by the one or more processors, the one or more characteristics of the image data; and storing, by the one or more processors, the one or more encrypted characteristics of the image data in the local storage.”
Tran, in the same field of endeavor of image content analysis systems, teaches accessing the image data for the first image; processing, by the one or more processors, the image data using a filter convolved across a width and a height of the first image in a neural network to determine one or more characteristics of the image data selected from a set of tags and based at least in part on image content of the first image (Tran, [0004], [0046], [0049]). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify Gokturk with the image analysis taught by Tran in order to solve the problem of conventional approaches for recognizing objects within media content that can be inefficient, inaccurate, and limited in capability (Tran, [0003]).
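For context on the recited limitation of a filter convolved across the width and height of an image to select a tag, the following is a purely illustrative sketch, not taken from Tran, Gokturk, or the claims themselves; the `convolve2d` and `tag_image` helpers and the threshold-based tag rule are hypothetical simplifications of what a trained convolutional network would do with many learned filters.

```python
# Illustrative sketch only: one filter slid across the width and height of
# a grayscale image ("valid" convolution), with the peak response mapped
# to one tag from a predefined set. Real CNNs learn many such filters.

def convolve2d(image, kernel):
    """Slide `kernel` across the width and height of `image` (valid mode)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            row.append(acc)
        out.append(row)
    return out

def tag_image(image, kernel, tags, threshold=1.0):
    """Hypothetical decision rule: map the maximum filter response to a tag."""
    response = convolve2d(image, kernel)
    peak = max(max(row) for row in response)
    return tags[0] if peak >= threshold else tags[1]
```

With a vertical-edge kernel such as `[[-1, 1], [-1, 1]]`, an image whose left half is dark and right half is bright produces a strong response along the boundary, so `tag_image` would select the first tag.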
Candalore teaches image capture as part of a surveillance system in which images of a person's face are derived through facial recognition and a determination is made as to whether to obfuscate (e.g., pixelate or mask) the images of the face based on one or more criteria, including the individual being a youth. Original images are stored locally on camera 10, encrypted for privacy ([0019]-[0021], [0023]). Candalore therefore teaches processing the one or more characteristics to assign a privacy status indicator for the image data; and responsive to identifying the privacy status indicator to be private (indicative of the image being that of a youth, as discussed above), encrypting, by the one or more processors, the one or more characteristics of the image data; and storing, by the one or more processors, the one or more encrypted characteristics of the image data in the local storage. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the combination of Gokturk and Tran with the teachings of Candalore in order to protect the privacy of youth by encrypting access to their images.
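The assign-a-privacy-status-then-encrypt-and-store behavior mapped to Candalore above can be sketched as follows. This is an illustrative approximation only, not Candalore's disclosed implementation: the `store_characteristics` helper, the youth-tag criterion, and the hash-based XOR keystream (a stand-in for a real cipher such as AES) are all hypothetical.

```python
# Illustrative sketch only (not Candalore's implementation): assign a
# privacy status from recognized characteristics and, when private,
# encrypt the characteristics before writing them to local storage.

import hashlib
import json

def keystream(key: bytes, n: int) -> bytes:
    """Derive n keystream bytes from `key` (toy construction, not AES)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Symmetric XOR with the keystream; applying it twice decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def store_characteristics(characteristics, key, local_storage):
    # Hypothetical rule mirroring Candalore's youth criterion: an image
    # recognized as containing a youth is marked private.
    private = "youth" in characteristics.get("tags", [])
    blob = json.dumps(characteristics).encode()
    local_storage["image_1"] = {
        "private": private,
        "data": xor_cipher(blob, key) if private else blob,
    }
    return private
```

Because the XOR stand-in is symmetric, applying `xor_cipher` a second time with the same key recovers the plaintext characteristics; a production system would instead use an authenticated cipher.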
As to claim 2, the combination discloses: The mobile device of claim 1, further comprising: a display, the one or more processors being coupled to the display; and wherein the one or more processors are further configured to perform operations comprising: presenting, by the display, a gallery view user interface including a plurality of images from the content database, each image in the plurality of images having a non-private privacy status indicator. (See Gokturk: For example, a user's search criteria of a proper name will return images that have been recognized as containing the person with the same name. Those images can come from the user's own collection or image library, col. 28, lines 21-23, col. 29, lines 11-12. Based on the teachings of Candalore as discussed above, it follows that a search of the proper name of a person who is not a youth would return images having a non-private status.)
As to claim 5, Gokturk discloses: The mobile device of claim 2, further comprising: an input device, the one or more processors being coupled to the input device; and wherein the one or more processors are further configured to perform operations comprising: receiving, by the input device, a password; responsive to receiving the password, updating the gallery view user interface to further include the first image. (see Gokturk, col. 48, lines 55-61)
As to claim 6, Gokturk discloses access information such as a login and password but fails to explicitly disclose wherein the password being one of: a four-digit personal identification number (PIN) or a sixteen-character passphrase. However, this is not considered to be a patentable distinction. It was notoriously well-known in the art prior to the effective filing date of the invention to use these types of passwords in order to enable security through passwords that could be readily recalled by the user. Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the combination with these teachings for the stated advantage.
As to claim 13, Tran teaches as part of the convolutional operation: The mobile device of claim 1, wherein the one or more characteristics of the image data comprises a list of objects identified by machine vision processing of the first image as part of the processing of the image data. (see Tran, Figure 3, object descriptors 360)
Claim 15 is met as discussed above for claim 1.
Claim 16 is met as discussed above for claim 2.
Claim 17 is met as discussed above for claim 5.
Claim 20 is met as discussed above for claim 1. Also, see Gokturk, col. 5, lines 40-60.
Claims 3, 4, 7-9, 14, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Gokturk et al. (US 7,519,200 B2) in view of Tran et al. (US 2016/0189009 A1), in further view of Candalore (US 2015/0104103 A1), and in further view of Garcia-Barrio (US 8,745,058 B1).
As to claim 3, the combination of Gokturk, Tran, and Candalore fails to disclose: The mobile device of claim 2, wherein the plurality of images being presented are based on a set of default search content characteristics from the content database. However, Garcia-Barrio discloses automatically presenting the gallery view with a first plurality of images from the gallery storage comprising the first image, wherein the plurality of images are ordered within the gallery view based on a set of default search content characteristics from the content database. (Upon a user query of "Chicago," Garcia-Barrio automatically presents a gallery of images on the user's head-mountable device, wherein the images are ordered alphabetically or chronologically based on a set of default location search content characteristics; the default search content characteristics are the city locations represented in the database. As shown in Figs. 4c and 4d, the user, using for example head motions, scrolls down from the Chicago images to the New York, San Francisco, and Seattle images. These images are taken by the head-mountable device in the course of the user's daily life and will therefore come from a library storing a captured first image. See col. 10, line 45; col. 11, lines 33-50; col. 12, lines 3-11.) It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the combination to include the teachings of Garcia-Barrio in order to allow for ease of searching of a user's images.
As to claim 4, Garcia-Barrio discloses: The mobile device of claim 3, wherein the set of default search content characteristics are periodically updated by a communication from a messaging server system. (see Garcia-Barrio, col. 8, lines 33-47) It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the combination to include these search characteristics in order to facilitate efficient and intuitive methods for searching and navigating through stored information (Garcia-Barrio, col. 1, lines 59-62).
As to claim 7, Garcia-Barrio teaches: The mobile device of claim 5, wherein the one or more processors are further configured to perform operations comprising: processing a user search input after presentation of the updated gallery view user interface; and in response to the user search input, generating a plurality of sets of suggested results. (Garcia-Barrio, col. 13, lines 50-65). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the combination to include these search characteristics in order to facilitate efficient and intuitive methods for searching and navigating through stored information (Garcia-Barrio, col. 1, lines 59-62).
As to claim 8, the combination discloses: The mobile device of claim 7, wherein the plurality of sets of suggested search results is based, at least in part, on a set of all characteristics within the content database (see Garcia-Barrio, col. 2, lines 5-28) and a set of all encrypted characteristics within the local storage (see Candalore, as discussed above, which teaches the local storage of images of youth, encrypted for privacy). The combination of Gokturk, Tran, and Candalore, as modified by Garcia-Barrio would yield a system that derives search results based on content with both private and non-private attributes.
As to claim 9, Garcia-Barrio teaches: The mobile device of claim 2, wherein the user interface comprises a plurality of headers, each header associated with a different set of search criteria and one or more content display areas, the plurality of headers comprising: a first header associated with a time period search criteria; a second header associated with a location search criteria; and a third header associated with an object based search criteria. (Garcia-Barrio, Figures 4C and 4D, and col. 5, lines 50-55).
As to claim 14, Garcia-Barrio teaches: The mobile device of claim 1, wherein the one or more characteristics of the image data comprises a set of objects, an image capture location, and an image capture time. (Garcia-Barrio, col. 5, lines 50-55).
As to claim 18, Garcia-Barrio teaches: The method of claim 17, further comprising: processing a user search input after presentation of the updated gallery view user interface; and in response to the user search input, generating a plurality of sets of suggested results. (Garcia-Barrio, col. 13, lines 50-65). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the combination to include these search characteristics in order to facilitate efficient and intuitive methods for searching and navigating through stored information (Garcia-Barrio, col. 1, lines 59-62).
Claims 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Gokturk et al. (US 7,519,200 B2) in view of Tran et al. (US 2016/0189009 A1), in further view of Candalore (US 2015/0104103 A1), in further view of Garcia-Barrio (US 8,745,058 B1), and in further view of Levoy et al. (US 9,195,880 B1).
As to claim 10, the combination of Gokturk, Tran, Candalore, and Garcia-Barrio discloses: The mobile device of claim 9, wherein the one or more processors are further configured to perform operations comprising: initiating display, in each content display area of the user interface, of search result content associated with each corresponding header for the content display area (Garcia-Barrio, Figs. 4C and 4D), but fails to disclose: and initiating display of a crossfade animation between individual search result content elements for each content display area as part of display of each image or video element for each set of search result content associated with each header. Levoy teaches this feature (Levoy, col. 21, line 34, to col. 22, line 43). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the combination to include the crossfade animation in order to facilitate a viewer switching between images in an image stack.
As to claim 11, the combination discloses: The mobile device of claim 10, wherein the search result content comprises one or more images, one or more video clips displayed within the corresponding display area, and one or more content collections. (see Garcia-Barrio, col. 14, line 43)
Claims 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Gokturk et al. (US 7,519,200 B2) in view of Tran et al. (US 2016/0189009 A1), in further view of Candalore (US 2015/0104103 A1), and in further view of Levoy et al. (US 9,195,880 B1).
As to claim 12, Levoy teaches: The mobile device of claim 1, wherein the one or more processors are further configured to perform operations comprising: receiving, at the mobile device, a first user input selecting the first image for inclusion in a content collection comprising a plurality of content elements; and publishing the content collection via an ephemeral messaging server system (see Levoy, ‘Image Stack Viewer as Part of a Social Network’, col. 19, line 51 et seq., and in particular col. 20, lines 10-20), wherein the first image is presented within the content collection based on a time of generation of the first image compared with a time of publication associated with the publication of the content collection. (see Levoy, col. 10, line 58 et seq., which discloses that an image stack might be used to generate a time-lapse, in which images in the stack are arranged based on time of generation). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the combination to include the noted teachings in order to facilitate the sharing of images among a social network.
Claim 19 is met as discussed above for claim 12.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to John W. Miller whose telephone number is (571) 272-7353. The examiner can normally be reached Monday - Friday 7:30 AM - 4:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Deborah Reynolds can be reached at (571) 272-0734. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOHN W MILLER/Supervisory Patent Examiner, Art Unit 2422