DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the communication filed on 09/15/2025, claims 1, 11, and 17-20 were amended. Claims 1-20 are pending in this examination.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. This Office action pertains to US Patent Application No. 17/652,326.
Examiner Note
Applicant's amendment to paragraph 22 of the specification obviates the previously raised trademark objection to the specification.
Claim Objection
Applicant amended independent claim 17 and dependent claims 18-20 with new limitations that do not appear in independent claims 1 and 11. In the interest of compact prosecution, the examiner suggests that applicant amend independent claims 1 and 11 in the same manner so that all independent claims recite similar limitations.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION. —The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for pre-AIA, the applicant) regards as the invention.
Claims 1 and 11 recite "comprising a first set of content items, a token comprising metadata tags of one or more faces from a facial recognition analysis" and "in the second set of content items according to the facial recognition analysis of the first set of content items," which renders the claims indefinite because the claims do not indicate how the submitter/client device performs the facial recognition analysis according to the first set of content items. The claims do not indicate that the same kind of facial recognition software is used on both sides; if different software is used, it is unclear how the facial recognition software at the submitter/client device knows what rules and configuration to use to analyze the second set of content items at the submitter side, or the one or more content items at the client device side.
Claim 17 recites "receive, from a client device of a collector in connection with a content item collection request identifying a collection folder, a link associated with trained facial recognition software and metadata tags of one or more faces from a facial recognition analysis of a set of content items on the client device of the collector," which renders the claim indefinite because it is unclear whether the client device belongs to the collector or the submitter, and/or whether the collection folder belongs to the collector; the claim also recites "the client device of the submitter." Furthermore, the claim recites that the link is "associated with" the trained facial recognition software and the metadata tags, but it is unclear how the link is associated with them.
Claim 17 recites "run token button," which renders the claim indefinite because the claim does not define a token and does not indicate what token, or which token, is to be run.
Claim 17 recites "a client device of collector" and "a client device of submitter," which renders the claim indefinite because it is unclear whether the client device belongs to the submitter or to the collector.
For purposes of examination, the examiner maps the limitations under the broadest reasonable interpretation.
Claims 2-10, 12-16, and 18-20 do not cure the deficiencies of claims 1, 11, and 17 and are rejected under 35 U.S.C. 112(b), or 35 U.S.C. 112 (pre-AIA), second paragraph, based on their dependency from claims 1, 11, and 17.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL. —The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement.
Claims 1-2, 11, and 15-16 recite "first set of content items," and claims 1-3, 10-11, and 19 recite "a second set of content items," which limitations were not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claims 1 and 11 also recite "comprising a first set of content items, a token comprising metadata tags of one or more faces from a facial recognition analysis" and "in the second set of content items according to the facial recognition analysis of the first set of content items," which limitations were not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.
Applicant is kindly requested to show the examiner support in the original disclosure for the new or amended claims. See MPEP 714.02 and 2163.06 ("Applicant should specifically point out the support for any amendments made to the disclosure.").
Claims 2-10 and 12-16 do not cure the deficiencies of claims 1 and 11 and are rejected under 35 U.S.C. 112(a), or 35 U.S.C. 112 (pre-AIA), first paragraph, based on their dependency from claims 1 and 11.
Claim 17 recites "receive from a client device of a collector in connection with a content item collection request identifying a collection folder, a link associated with trained facial recognition software and metadata tags of one or more faces from a facial recognition analysis of a set of content items on the client device of the collector," which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention. The examiner was unable to locate the limitation recited above in the applicant's specification. The specification at paragraph 94 describes that the content management system can then distribute the generated link to the one or more submitters via email; alternatively, the collector can distribute the generated link to the one or more submitters via email, instant message, text message, or by posting the link to a website, or any other means of distributing the link. The claim, however, appears to recite that the client device generates the link and distributes it.
Applicant is kindly requested to show the examiner support in the original disclosure for the new or amended claims. See MPEP 714.02 and 2163.06 ("Applicant should specifically point out the support for any amendments made to the disclosure.").
Claims 18-20 do not cure the deficiencies of claim 17 and are rejected under 35 U.S.C. 112(a), or 35 U.S.C. 112 (pre-AIA), first paragraph, based on their dependency from claim 17.
Claims 1-3, 8, 10-12, and 14-17 recite "generating, by content management system, a token …," "performing facial recognition analysis … utilizing the token," and "run token button." This claimed "token," according to the claims, is generated, includes tags and filters, and is capable of being used to "identify at least one face or more faces" in the "[second]" set of content items.
However, the specification is devoid of any description as to (1) how the token is generated and (2) how such a token is capable of being utilized to identify the faces or objects as claimed.
The claimed functions are recited in purely desired-result functional language. "It should be noted that the written description requirement under 112(a) is not satisfied by stating that one of ordinary skill in the art could devise an algorithm to perform the specialized programmed functions. For written description, the specification as filed must describe the claimed invention in sufficient detail so that one of ordinary skill in the art can reasonably conclude that the inventor had possession of the claimed invention. An original claim may lack written description when the claim defines the invention in functional language specifying a desired result, but the specification does not sufficiently identify how the inventor has devised the function to be performed, or result achieved."
Portions of the specification merely repeat the desired result in functional language, e.g., "the collector can send a token, containing the one or more tagged people and/or objects, to one or more submitters who can use the token to submit additional photographs based on the token. The disclosed technology addresses the need in the art for a collector to be able to collect additional photographs from one or more submitters. The metadata and metadata tags can be saved as a token. Then the collector can send a file request to others with the trained event token…the recognition software using the token from the collector…".
The entirety of the claimed invention depends on this so-called "token," but the token finds no description of how the applicant intends this function to be achieved. The specification indicates that the "recognition module 146" of FIG. 1 "can" somehow "provide different tokens." This "module" is explained with nothing more than desired-result functional capabilities.
Per MPEP 2161.01, "computer-implemented functional claim language must still be evaluated for sufficient disclosure under the written description" requirement, and per MPEP 2161.01(I), "generic claim language in the original disclosure does not satisfy the written description requirement if it fails to support the scope of the genus claimed." For computer-implemented inventions, the determination of the sufficiency of disclosure will require an inquiry into the sufficiency of both the disclosed hardware and the disclosed software due to the interrelationship and interdependence of computer hardware and software. The critical inquiry is whether the disclosure of the application relied upon reasonably conveys to those skilled in the art that the inventor had possession of the claimed subject matter as of the filing date.
As stated in MPEP 2161.01(I), "The description requirement of the patent statute requires a description of an invention, not an indication of a result that one might achieve if one made that invention." It is not enough that one skilled in the art could write a program to achieve the claimed function, because the specification must explain how the inventor intends to achieve the claimed function in order to satisfy the written description requirement. See, e.g., Vasudevan Software, Inc. v. MicroStrategy, Inc., 782 F.3d 671, 681-683, 114 USPQ2d 1349, 1356-57 (Fed. Cir. 2015).
Therefore, claims 1-20 fail to comply with the written description requirement because the specification is devoid of an adequate description of how the claimed functions are performed.
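To make the deficiency concrete, the following is a minimal, purely hypothetical sketch (Python) of one of many possible forms the claimed "token" and its client-side use could take. None of these structures, field names, or steps appears in applicant's specification; that absence is precisely the gap identified above:

```python
# Purely hypothetical sketch: one of many possible "token" structures and a
# client-side matching routine. Nothing here is disclosed in the specification.
from dataclasses import dataclass
from datetime import date

@dataclass
class FaceToken:
    face_tags: set              # metadata tags of faces from the first-set analysis
    date_range: tuple = None    # optional metadata filter parameter (start, end)
    location: str = None        # optional metadata filter parameter

@dataclass
class ContentItem:
    faces: set                  # faces tagged in this item
    captured: date
    location: str

def match_items(token, items):
    """Return items that pass the token's metadata filters and contain at
    least one face named in the token's tags (the claimed desired result)."""
    selected = []
    for item in items:
        if token.date_range and not (token.date_range[0] <= item.captured <= token.date_range[1]):
            continue
        if token.location and item.location != token.location:
            continue
        if token.face_tags & item.faces:  # set intersection: any shared face tag
            selected.append(item)
    return selected
```

Because the specification never commits to any such concrete structure, training procedure, or matching algorithm, the claims recite only the desired result of the "token," not how it is generated or used.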
Response to Arguments
Applicant's arguments filed 09/15/2025 have been fully considered but they are not persuasive:
Applicant submits on pages 13-14 of the remarks filed on 09/15/2025, regarding the rejection under 35 U.S.C. 112, that the Office action rejects claims 1-20 under 35 U.S.C. § 112(a) as failing to comply with the written description requirement; that applicant hereby amends claims 17-20 to address the issues identified in the Office action; that, regarding claims 1-16, the written description contained in the specification meets the standard laid out in MPEP § 2163, namely that, based on the disclosure contained in the specification, a skilled artisan would have understood the inventor to be in possession of the claimed invention at the time of filing; that the adequate description requirement is met; and that, in view of the amendments to claims 17-20, the § 112(a) rejection of claims 1-20 should be withdrawn.
Examiner respectfully disagrees with applicant's arguments regarding 35 U.S.C. § 112(a) filed on 09/15/2025 on pages 13-14 of the remarks. Examiner refers applicant to the rejections under 35 U.S.C. 112(a) and 112(b) set forth above.
Applicant submits on pages 14-15 of the remarks filed on 09/15/2025, regarding claims 1 and 11, that Kamei fails to disclose "receiving [...] at least one content item from the client device of the submitter based on performing facial recognition at the client device of submitter utilizing the token, with the one or more metadata filter parameters, on a second set of content items associated with the submitter to identify at least one face of the one or more faces in the second set of content items according to the facial recognition analysis of the first set of content items".
Examiner respectfully disagrees with applicant's arguments regarding claims 1 and 11 filed on 09/15/2025 on pages 14-15 of the remarks.
First, as noted in the 35 U.S.C. 112(a) section above, the terms "first set of content items" and "second set of content items" are not defined in the specification.
Second, the limitations "comprising a first set of content items, a token comprising metadata tags of one or more faces from a facial recognition analysis" and "in the second set of content items according to the facial recognition analysis of the first set of content items" were not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.
Kamei discloses the features argued by applicant. See the detailed citations to Kamei ¶¶3, 101-102, 105, 106, 112-123, and 225-229, and to FIGS. 3, 6B, and 11 and their corresponding text, set forth in the rejection of claim 1 under 35 U.S.C. 102(a)(1) below. In particular, Kamei's metadata records capture date, capture location (equated to a metadata filter parameter), and person attributes for each content item, and Kamei's content storage devices search stored metadata for at least one attribute value identical to an attribute value in the received original sub-metadata and set the corresponding match flag to "1" (¶¶225-229; step S920).
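For clarity of the record, the attribute-matching behavior of Kamei ¶¶225-229 and FIG. 3 can be paraphrased in the following minimal sketch. Kamei discloses no source code, so every identifier below is the examiner's illustrative paraphrase, not Kamei's implementation:

```python
# Sketch of Kamei's attribute matching (paraphrasing paras. 225-229 and FIG. 3);
# Kamei discloses no code, so all names here are illustrative assumptions.
ATTRIBUTES = {1: "capture_date", 2: "capture_location", 3: "person"}

def find_relevant_metadata(received, stored_metadata):
    """Return stored metadata sharing at least one attribute value with the
    received original sub-metadata, setting the corresponding match flag to 1
    (cf. step S920)."""
    matches = []
    for metadata in stored_metadata:
        flags = {attr_id: 0 for attr_id in ATTRIBUTES}
        for attr_id, name in ATTRIBUTES.items():
            if metadata.get(name) is not None and metadata.get(name) == received.get(name):
                flags[attr_id] = 1  # identical attribute value found
        if any(flags.values()):
            matches.append({**metadata, "match_flags": flags})
    return matches

# Example using Kamei's FIG. 3 values: the original content A0001 was captured
# in London on Aug. 16, 2003; C0005 matches on capture date, C0014 on location.
received = {"capture_date": "2003-08-16", "capture_location": "London", "person": "user A"}
stored = [
    {"content_id": "C0005", "capture_date": "2003-08-16", "capture_location": "Osaka", "person": "user X"},
    {"content_id": "C0014", "capture_date": "2005-02-06", "capture_location": "London", "person": "user Y"},
]
print(find_relevant_metadata(received, stored))
```

Under this reading, a stored item is relevant whenever any one attribute (date, location, or person) matches; this is the behavior the examiner relies upon in the mapping above.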
Examiner maintains the rejection.
Applicant submits on page 15 of the remarks filed on 09/15/2025, regarding claim 17, that Kamei fails to disclose each of the limitations of currently amended independent claim 17; specifically, that Kamei fails to disclose "responsive to receiving an indication of a selection of the run token button, executing the trained facial recognition software on one or more content items indicated by the client device of the submitter to identify a face of the one or more faces indicated in the facial recognition analysis." Applicant argues that, while Kamei discloses a "face recognition function," at best the face recognition function allows specification of a name and a position of a face, and does not identify faces in one or more content items indicated in a facial recognition analysis as more particularly recited above.
Examiner respectfully disagrees with applicant's arguments regarding claim 17 filed on 09/15/2025 on page 15 of the remarks.
Kamei discloses the argued limitations at ¶¶11, 82, 84-85, 220-234, 304-305, and 340-342; see FIGS. 3, 6B, and 11 and corresponding text for more details.
Examiner maintains the rejection and, for compact prosecution, suggests that applicant either make the limitations of independent claim 17 similar to those of independent claims 1 and 11, or vice versa.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-5, 9-11, 15-16, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kamei (US 2011/0145305).
Regarding claim 1, Kamei discloses a computer implemented method comprising: generating, by a content management system in connection with a content item collection request identifying a collection folder comprising a first set of content items [¶¶84-85: "The content management unit 201 stores and manages a digital photograph (i.e., a content) as an image encoded in JPEG format in association with metadata (equated to a token). The content management unit 201 transmits/receives data to/from the relevant metadata extraction unit 202, the display unit 204, the input reception unit 206, the reception unit 207, and the transmission unit 208. The content management unit 201 further includes a metadata extraction unit 221, a metadata updating unit 222, a content extraction unit 223, a content updating unit 224, a content management control unit 225, a metadata storage unit 231, a content storage unit 232, and a content correspondence table storage unit 233"]; and
a token comprising metadata tags of one or more faces from a facial recognition analysis [¶84: "The content management unit 201 stores and manages a digital photograph (i.e., a content) as an image encoded in JPEG format in association with metadata (equated to a token)"] and [¶106: "A name and a position of a face of a person as an object are specified, for example, by capturing the object using an Exif-compliant digital still camera that has a face recognition function of specifying a name and a position of a face of the person by recognizing the face"]; and
distributing, by the content management system, the token with one or more metadata filter parameters based on the metadata tags to a client device of a submitter [¶3: "Metadata that includes information relating to a situation in which a digital content was captured, for example, information about a date when the digital content was captured, a location at which the digital content was captured (equated to a metadata filter parameter), a person who is an object of capturing and so on is recorded in association with the digital content"]; [¶¶101-102: "As an attribute identified by the attribute ID 302, there are three attributes: an attribute 1 (capture date) that shows a date when the content identified by the original content ID 306 was captured; an attribute 2 (capture location) that shows a location at which the content identified by the original content ID 306 was captured; and an attribute 3 (person) that shows a person included in the content identified by the original content ID 306"]; [¶105: "A location at which a content was captured is specified, for example, by capturing an object using an Exif-compliant digital still camera that has a position measuring function of specifying a regional name of the location based on longitude and latitude of the location by using a GPS (Global Positioning System)"]; and [FIG. 11 and ¶¶222-223: "Upon receiving the signal for actively collecting metadata, the content management control unit 225 causes the metadata extraction unit 221 to extract metadata corresponding to the displayed content from the metadata storage unit 231. The extracted metadata is transmitted to the transmission unit 208. Upon receiving the transmitted metadata, the transmission unit 208 transmits the received metadata and a URL of the content storage device A to URLs of all content storage devices stored in the address storage unit 209 via the network 190 (step S910)"]; and
receiving, by the content management system, at least one content item from the client device of the submitter based on performing facial recognition analysis at the client device of the submitter utilizing the token, with the one or more metadata filter parameters, on a second set of content items associated with the submitter to identify at least one face of the one or more faces in the second set of content items according to the facial recognition analysis of the first set of content items [¶3: "Metadata that includes information relating to a situation in which a digital content was captured, for example, information about a date when the digital content was captured, a location at which the digital content was captured (equated to a metadata filter parameter), a person who is an object of capturing and so on is recorded in association with the digital content"]; [¶106: "A name and a position of a face of a person as an object are specified, for example, by capturing the object using an Exif-compliant digital still camera that has a face recognition function of specifying a name and a position of a face of the person by recognizing the face"]; [¶¶101-102: "As an attribute identified by the attribute ID 302, there are three attributes: an attribute 1 (capture date) that shows a date when the content identified by the original content ID 306 was captured; an attribute 2 (capture location) that shows a location at which the content identified by the original content ID 306 was captured (equated to a metadata filter parameter); and an attribute 3 (person) that shows a person included in the content identified by the original content ID 306"]; [¶105: "A location at which a content was captured is specified, for example, by capturing an object using an Exif-compliant digital still camera that has a position measuring function of specifying a regional name of the location based on longitude and latitude of the location by using a GPS (Global Positioning System)"]; [FIG. 3 and ¶¶112-123: "The original content ID 306 included in the original sub-metadata 311 is A0001. Therefore, a content corresponding to the original sub-metadata 311 is stored in the content storage device owned by the user A. It can be seen from the attribute information that a content identified by the content ID A0001 was captured in London on Aug. 16, 2003, and the user A and a user B are included in the content. … It can be seen from the attribute information that a content identified by the content ID C0005 was captured in Osaka on Aug. 16, 2003, and a user X is included in the content. … It can be seen from the attribute information that a content identified by the content ID C0014 was captured in London on Feb. 6, 2005, and a user Y is included in the content. … It can be seen from the attribute information that a content identified by the content ID B0012 was captured in Beijing on Sep. 26, 2000, and a user Z and the user B are included in the content"]; and [FIG. 11 and ¶¶225-229: "The reception unit 207 in each of the content storage devices B and C receives the transmitted metadata and URL via the network 190 (step S915), and then transmits the received metadata and URL to the relevant metadata extraction unit 202. Upon receiving the transmitted metadata and URL, the relevant metadata extraction unit 202 transmits to the content management control unit 225, an instruction for searching the metadata storage unit 231 for metadata that includes attribute information including at least one attribute value that is identical to an attribute value included in attribute information in the original sub-metadata in the received metadata. The relevant metadata extraction unit 202 also transmits the received metadata and URL to the content management control unit 225. Upon receiving the instruction, metadata, and URL, the content management control unit 225 causes the metadata extraction unit 221 to search the metadata storage unit 231 for metadata that includes attribute information including at least one attribute value that is identical to an attribute value included in attribute information in the original sub-metadata in the received metadata (step S920). When the metadata is found (Yes in step S920), the content management control unit 225 sets a match flag corresponding to the attribute ID identifying the attribute information including the identical attribute value to '1' and transmits the extracted metadata and the received URL to the transmission unit 208. Upon receiving the transmitted metadata and URL, the transmission unit 208 transmits the received metadata to the received URL (i.e., the content storage device A) via the network 190 (step S925)"]; see also FIGS. 3 and 6B and corresponding text for more details; and
storing, by the content management system, the at least one content item from the client device of the submitter in the collection folder [FIG. 11 and ¶231: "The reception unit 207 in the content storage device A receives the transmitted metadata via the network 190 within a certain period of time (e.g., two hours) (Yes in step S930). Upon receiving the transmitted metadata, the reception unit 207 transmits the received metadata to the content management control unit 225. Upon receiving the transmitted metadata, the content management control unit 225 causes the metadata updating unit 222 to add the received metadata, as additional sub-metadata, to metadata that is stored in the metadata storage unit 231 and corresponds to a content currently being displayed to update the metadata (step S935)"].
Regarding claim 2, Kamei discloses further comprising: generating, by the content management system for a first set of content items, an additional token for identifying an object based on an object detection analysis of the first set of content items; distributing, by the content management system, the additional token to the client device of the submitter; and receiving, by the content management system, a content item from the client device of the submitter in response to performing object recognition analysis at the client device of the submitter utilizing the additional token on the second set of content items associated with the submitter to identify the object in the second set of content items [FIGS. 3 and 6B and corresponding text]; [¶11: "In order to achieve the above aim, one aspect of the present invention is a content storage processing system comprising: a content storage unit operable to store therein a plurality of contents; a metadata storage unit operable to store therein a plurality of pieces of metadata being in one-to-one correspondence with the plurality of contents, the plurality of pieces of metadata each including two or more pieces of attribute information showing attributes of a corresponding content; a content specification reception unit operable to receive specification of a content among the plurality of contents stored in the content storage unit; an original metadata extraction unit operable to extract a first piece of metadata corresponding to the specified content from the plurality of pieces of metadata stored in the metadata storage unit; a relevant metadata search unit operable to search the metadata storage unit for a second piece of metadata including one or more pieces of attribute information that are identical to corresponding one or more pieces of attribute information included in the first metadata, and extract the second metadata; and an attribute information addition unit operable, when the content specification reception unit receives the specification, the original metadata extraction unit extracts the first metadata, and the relevant metadata search unit extracts the second metadata, to perform either or both of the following: addition of one or more pieces of attribute information included in the second metadata to the first metadata; and addition of one or more pieces of attribute information included in the first metadata to the second metadata"]; and [¶¶84-85, 222-229, 231].
Regarding claim 3, Kamei discloses wherein distributing the token with the one or more metadata filter parameters comprises determining the one or more metadata filter parameters for identifying content items comprising the one or more faces in connection with an event from the second set of content items [¶101: "As an attribute identified by the attribute ID 302, there are three attributes: an attribute 1 (capture date) that shows a date when the content identified by the original content ID 306 was captured; an attribute 2 (capture location) that shows a location at which the content identified by the original content ID 306 was captured; and an attribute 3 (person) that shows a person included in the content identified by the original content ID 306"]; [¶105: "A location at which a content was captured is specified, for example, by capturing an object using an Exif-compliant digital still camera that has a position measuring function of specifying a regional name of the location based on longitude and latitude of the location by using a GPS (Global Positioning System)"]; [¶121: "The match flag 305 corresponding to the attribute 2 (capture location) is set to '1'. Therefore, the capture location London is identical to that of a content corresponding to the original sub-metadata 311"]; and [¶173].
Regarding claim 4, Kamei discloses wherein determining the one or more metadata filter parameters comprises determining a date filter, a time filter, a location filter, or a season filter [FIG. 3, attribute information (#310, 302, 304); FIG. 6 and corresponding text for more detail] and [¶205: "When there is the additional sub-metadata including the attribute value corresponding to the attribute 1 (capture date) that is identical to that in the original sub-metadata (Yes in step S580), the content management control unit 225 creates a character string '`Person` was in `capture location` at the time'. The character string is created by using attribute values corresponding to the attribute 3 (person) and the attribute 2 (capture location) included in the additional sub-metadata as variables"].
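As an illustration only of how the recited date, time, location, or season filters could be expressed as metadata predicates (neither the application nor Kamei discloses such code, and the season convention below is an assumption):

```python
# Illustration only: neither the application nor Kamei discloses code for
# date/time/location/season filters; the season convention is an assumption.
from datetime import datetime

def season_of(dt):
    # Meteorological Northern-Hemisphere seasons, one possible convention.
    return {12: "winter", 1: "winter", 2: "winter",
            3: "spring", 4: "spring", 5: "spring",
            6: "summer", 7: "summer", 8: "summer",
            9: "autumn", 10: "autumn", 11: "autumn"}[dt.month]

def make_filters(on_date=None, at_hour=None, location=None, season=None):
    """Build a list of metadata predicates; an item passes if all are True."""
    filters = []
    if on_date is not None:
        filters.append(lambda item: item["captured"].date() == on_date)
    if at_hour is not None:
        filters.append(lambda item: item["captured"].hour == at_hour)
    if location is not None:
        filters.append(lambda item: item["location"] == location)
    if season is not None:
        filters.append(lambda item: season_of(item["captured"]) == season)
    return filters

# Example: Kamei's FIG. 3 content A0001 (London, Aug. 16, 2003) passes a
# "London" location filter and a "summer" season filter.
item = {"captured": datetime(2003, 8, 16, 14, 0), "location": "London"}
print(all(f(item) for f in make_filters(location="London", season="summer")))  # True
```

Kamei's attribute 1 (capture date) and attribute 2 (capture location) map directly onto the date and location predicates; a time or season predicate would be derivable from the same capture-date metadata.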
Regarding claim 5, Kamei discloses wherein receiving the at least one content item from the client device of the submitter comprises receiving the at least one content item with metadata corresponding to the event [¶¶101, 105, and 121, as quoted above regarding claim 3]; [¶173]; and [FIG. 3, attribute information (#310, 302, 304); FIG. 6 and corresponding text for more detail].
Regarding claim 9, Kamei discloses further comprising: detecting, by the content management system, a face of an additional submitter within the at least one content item from the client device of the submitter; and generating, by the content management system, a communication to a client device of the additional submitter including a notification that the face of the additional submitter was detected within the at least one content item from the client device of the submitter [¶¶358-359: "The above-mentioned content chain display operation enables a user A who owns the content storage device A to transmit a content that is being viewed by the user A to a user B who is a friend of the user A and owns the content storage device B. Then, one or more contents that are owned by the user B and relevant to the transmitted content can be displayed on a display in the content storage device B. Furthermore, the content chain display operation enables the user B to select a content from among one or more contents being displayed, and transmit the selected content to the content storage device A owned by the user A. Then, one or more contents that are owned by the user A and relevant to the transmitted content can be displayed on a display in the content storage device A"]; [FIGS. 3 and 6A-6B and corresponding text]; and [¶23].
Regarding claim 10, Kamei discloses receiving, by the content management system, one or more tags from the client device of the submitter identifying one or more additional faces within the second set of content items; updating, by the content management system, the token to identify the one or more additional faces based on a facial recognition analysis of the second set of content items and based on the one or more tags; and providing, by the content management system to a collector associated with the collection folder, a notification of the one or more tags received from the client device of the submitter [¶¶358-359, as quoted above regarding claim 9]; [FIGS. 3 and 6A-6B and corresponding text]; and [¶23].
Regarding claim 11, the claim is interpreted and rejected under the same rationale set forth for claim 1.
Regarding claim 15, Kamei discloses further comprising instructions that, when executed by the at least one processor, cause the system to: generate an additional token trained to identify the one or more faces via the facial recognition analysis against the first set of content items, and distribute the additional token to the client device of the submitter without the one or more metadata filter parameters [FIGS. 3 and 6B and corresponding text]; [¶11, as quoted above regarding claim 2]; and [¶¶84-85, 222-229, 231].
Regarding claim 16, Kamei discloses further comprising instructions that, when executed by the at least one processor, cause the system to: generate an additional token trained to identify one or more objects based on an object recognition analysis of a first set of content items; and distribute the additional token to the client device of the submitter to obtain one or more additional content items comprising the one or more objects utilizing the additional token [FIGS. 3 and 6B and corresponding text]; [¶11, as quoted above regarding claim 2]; and [¶¶84-85, 222-229, 231].
Regarding claim 20, Kamei discloses further comprising instructions thereon that, when executed by at least one processor, cause a computer system to execute trained facial recognition software with the metadata by identifying one or more content items that include the one or more faces and comprise metadata with a date, a time, or a location corresponding to the metadata tags [FIG. 3, attribute information (#310, 302, 304); FIG. 6 and corresponding text for more detail]; [¶205, as quoted above regarding claim 4]; [¶84: "The content management unit 201 stores and manages a digital photograph (i.e., a content) as an image encoded in JPEG format in association with metadata"]; [¶106: "A name and a position of a face of a person as an object are specified, for example, by capturing the object using an Exif-compliant digital still camera that has a face recognition function of specifying a name and a position of a face of the person by recognizing the face"]; and [¶¶3, 101-102, 105, 112-113, 225-229; FIGS. 3 and 6B and corresponding text for more details].
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 6-8, 12-14, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication No. 2011/0145305 to Kamei in view of US Patent Application Publication No. 2014/0067929 to Kirigin et al. (hereinafter "Kirigin"; cited in the IDS filed 03/03/2022).
Regarding claim 6, Kamei discloses further comprising determining, by the content management system based on a selection of one or more selectable options, one or more customized triggers associated with the collection folder [¶¶304-305: "The selection rule information storage unit 1412 is a storage area in which selection rule information for searching for metadata is stored. The selection rule information shows priority levels of three attribute IDs each identifying the attribute 1 (capture date), the attribute 2 (capture location), and the attribute 3 (person). FIG. 18 shows a data structure of the selection rule information stored in the selection rule information storage unit 1412"].
Kamei does not explicitly disclose, but Kirigin discloses, in response to detecting a trigger of the one or more customized triggers in message content generated based on a user input, generating, by the content management system, a prompt to create the content item collection request and a link to the collection folder to provide to the client device of the submitter [Abstract: "A file set viewing window is opened when a second user activates the file-sharing link. The at least one file sharing access server receives a request to upload one or more files by the second user via the file sharing link, and the at least one file access server receives the uploaded one or more files and stores the files to a location designated by the link"] and [FIGS. 5-12, user interface menus; menu screen 60 opens, allowing the user to select a DROPBOX option].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Kamei by incorporating an "e-mail message that can be used to share the file sharing link," as taught by Kirigin. One could have been motivated to do so because, once the link is created, it can be distributed to submitters so that files uploaded via the link are stored to the location designated by the link (Kirigin, Abstract), thereby facilitating collection of content items into the collection folder.
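For context, the link-based collection flow Kirigin describes (Abstract; FIGS. 5-12), in which uploaded files are stored to the location the link designates, can be sketched as follows. Kirigin discloses no source code, so every identifier below is illustrative:

```python
# Illustrative sketch of Kirigin's file-sharing-link flow (Abstract; FIGS. 5-12);
# Kirigin discloses no source code, so all identifiers here are assumptions.
import secrets

UPLOAD_LOCATIONS = {}  # link token -> folder designated by the link

def create_file_request_link(folder):
    """Collector side: generate a link whose token designates an upload folder."""
    token = secrets.token_urlsafe(8)
    UPLOAD_LOCATIONS[token] = folder
    return "https://example.invalid/request/" + token

def handle_upload(link, filename):
    """Server side: resolve the link and return the storage path for the
    uploaded file (actual storage omitted in this sketch)."""
    token = link.rsplit("/", 1)[-1]
    return UPLOAD_LOCATIONS[token] + "/" + filename

link = create_file_request_link("/collections/event-photos")
print(handle_upload(link, "photo.jpg"))  # /collections/event-photos/photo.jpg
```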