DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This action is responsive to the amendments and remarks received 04 December 2025. Claims 1 - 9, 11 - 19 and 21 - 31 are currently pending.
Claim Objections
Claim 1 is objected to because of the following informalities: Line 8 of claim 1 recites, in part, “the receiving device” which appears to contain inconsistent claim terminology. The Examiner suggests amending the claim to --the receiving mobile device-- in order to maintain consistency with line 3 of claim 1 and to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 1 is objected to because of the following informalities: Line 9 of claim 1 recites, in part, “the behavior of the user of the receiving mobile device” which appears to contain inconsistent claim terminology. The Examiner suggests amending the claim to --the user’s behavior of the receiving mobile device-- in order to maintain consistency with lines 7 - 8 of claim 1 and to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 1 is objected to because of the following informalities: Lines 10 - 11 of claim 1 recite, in part, “the at least one image archived on the receiving mobile” which appears to contain inconsistent claim terminology and/or a minor informality. The Examiner suggests amending the claim to --the at least one archived image on the receiving mobile device-- in order to maintain consistency with lines 5 - 6 of claim 1 and to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 6 is objected to because of the following informalities: Line 1 of claim 6 recites, in part, “the meta data comprises” which appears to contain a typographical error and/or a minor informality. The Examiner suggests amending the claim to --the metadata comprises-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 11 is objected to because of the following informalities: Line 13 of claim 11 recites, in part, “the receiving device” which appears to contain inconsistent claim terminology. The Examiner suggests amending the claim to --the receiving mobile device-- in order to maintain consistency with lines 1 - 2 of claim 11 and to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 11 is objected to because of the following informalities: Lines 13 - 15 of claim 11 recite, in part, “blocking or archiving the duplicate image based on the behavior of the user of the at least one image from being archived by the receiving mobile device when” which appears to contain inconsistent claim terminology, grammatical errors and/or minor informalities. The Examiner suggests amending the claim to --blocking or archiving the duplicate image based on the user’s behavior of the receiving mobile device when-- in order to maintain consistency with the claim terminology and to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 12 is objected to because of the following informalities: Lines 3 - 4 of claim 12 recite, in part, “and the images archived on the receiving device and images previously received at the receiving device” which appears to contain inconsistent claim terminology and/or minor informalities. The Examiner suggests amending the claim to --and the images on the receiving mobile device and images previously received at the receiving mobile device-- in order to maintain consistency with lines 1 - 2 and line 10 of claim 12 and to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 16 is objected to because of the following informalities: Lines 1 - 2 of claim 16 recite, in part, “the meta data comprises” which appears to contain a typographical error and/or a minor informality. The Examiner suggests amending the claim to --the metadata comprises-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 21 is objected to because of the following informalities: Line 8, line 10 and line 13 of claim 21 each recite, in part, “the receiving device” which appears to contain inconsistent claim terminology. The Examiner suggests amending line 8, line 10 and line 13 of claim 21 to --the receiving mobile device-- in order to maintain consistency with line 3 of claim 21 and to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 21 is objected to because of the following informalities: Lines 11 - 12 of claim 21 recite, in part, “the image when the comparator finds” which appears to contain inconsistent claim terminology and/or a minor informality. The Examiner suggests amending the claim to --the at least one image when the comparator finds-- in order to maintain consistency with line 5 and lines 7 - 8 of claim 21 and to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 21 is objected to because of the following informalities: Line 13 of claim 21 recites, in part, “the behavior of the user of the receiving device” which appears to contain inconsistent claim terminology. The Examiner suggests amending the claim to --the user’s behavior of the receiving mobile device-- in order to maintain consistency with line 3 and line 10 of claim 21 and to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 26 is objected to because of the following informalities: Lines 1 - 2 of claim 26 recite, in part, “the meta data vomprises at least one of” which appears to contain typographical errors and/or minor informalities. The Examiner suggests amending the claim to --the metadata comprises at least one of-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 31 is objected to because of the following informalities: Line 9 of claim 31 recites, in part, “the receiving device” which appears to contain inconsistent claim terminology. The Examiner suggests amending the claim to --the receiving mobile device-- in order to maintain consistency with lines 4 - 5 of claim 31 and to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 31 is objected to because of the following informalities: Lines 9 - 11 of claim 31 recite, in part, “blocking or archiving the duplicate image based on the behavior of the user of the receiving mobile device from archiving at least one received image when a duplicate” which appears to contain inconsistent claim terminology, grammatical errors and/or minor informalities. The Examiner suggests amending the claim to --blocking or archiving the duplicate image based on the user’s behavior of the receiving mobile device when a duplicate-- in order to maintain consistency with the claim terminology and to improve the clarity and precision of the claim. Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “input/output module for receiving”, “an image archive for storing”, “a comparator… adapted to compare”, “a blocker… blocks” and “an archiver adapted to archive” in claims 11, 21, 22, 25 - 27 and 29.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112(a)
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 6, 16 and 26 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. In the amendment received 04 December 2025, claims 6, 16 and 26 have been amended to include “wherein the meta data comprises at least one of RGB values, pixel values and average pixel values” (emphasis added). The Examiner cannot find support for the newly amended claim scopes of claims 6, 16 and 26 in the original disclosure. The original disclosure mentions metadata only one time and describes, at best, “comparing the received image to at least one of archived images and previously received images” and that “[s]uch comparison may utilize metadata, RGB values, pixel values and average pixel values to perform the comparisons”, see at least the abstract, figure 1, page 1 paragraph 0003, page 2 paragraph 0009 and page 4 paragraph 0018 of the original disclosure. In other words, the Examiner asserts that the original disclosure only supports and describes metadata in a general and broad sense. However, the original disclosure makes no mention of what data and/or information the metadata comprises, much less that the metadata comprises at least one of RGB values, pixel values and average pixel values. Therefore, claims 6, 16 and 26 are rejected as new matter.
Claim 22 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. In the amendment received 04 December 2025, claim 22 has been amended to include “wherein the designated location is designated by the artificial intelligence based on user behavior” (emphasis added). The Examiner cannot find support for the newly amended claim scope of claim 22 in the original disclosure. The original disclosure describes, at best, that “the blocking of the viewing of the image 112 may include rejecting the image, deleting the image, archiving the image in a designated folder, and/or following a system setting that determines the action taken on received images that are duplicates” and “utilizing artificial intelligence to learn a user's learned behavior. For example, the apparatus 300 may learn that a user enjoys receiving images in black and white. In such a scenario, the apparatus would set itself to allow the receipt of duplicate images in different colors. In other circumstances, the apparatus 300 may determine that the user consider the same image with the different colors duplicate. As such, the apparatus 300 would block such images in such scenarios”, see at least page 2 paragraph 0010 and page 4 paragraph 0020 - page 5 paragraph 0021 of the original disclosure. In other words, the Examiner asserts that the original disclosure only supports and describes utilizing artificial intelligence to learn whether a duplicate image should be blocked and/or archived or not.
However, the original disclosure makes no mention of utilizing artificial intelligence to learn and/or designate where duplicate images should be archived. Therefore, claim 22 is rejected as new matter.
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1 - 9, 11 - 19 and 21 - 31 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites the limitation "the metadata of at least one archived image on the receiving mobile device of the other user;" in lines 6 - 7. There is insufficient antecedent basis for this limitation in the claim.
Claim 1 recites the limitation "the user’s behavior of the receiving device with duplicate images," (emphasis added) in lines 7 - 8. There is insufficient antecedent basis for this limitation in the claim.
Claim 1 recites the limitation "the duplicate image" in lines 8 - 9. There is insufficient antecedent basis for this limitation in the claim.
Claim 6 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention because it is unclear which metadata “the meta data” recited on line 1 is referencing. Is it referring to the “metadata of the image” recited on line 5 of claim 1, “the metadata of at least one archived image” recited on line 5 of claim 1 or both? Clarification and appropriate correction are required. For purposes of examination, the Examiner will treat the claim as referencing at least one of the “metadata of the image” recited on line 5 of claim 1 and “the metadata of at least one archived image” recited on line 5 of claim 1.
Claim 11 recites the limitation "the metadata of at least one of archived images on the receiving mobile device” in lines 9 - 10. There is insufficient antecedent basis for this limitation in the claim.
Claim 11 recites the limitation "the user’s behavior of the receiving device with duplicate images," (emphasis added) in lines 12 - 13. There is insufficient antecedent basis for this limitation in the claim.
Claim 11 recites the limitation "the duplicate image" in lines 13 - 14. There is insufficient antecedent basis for this limitation in the claim.
Claim 16 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention because it is unclear which metadata “the meta data” recited on line 1 is referencing. Is it referring to the “metadata of at least one image” recited on line 9 of claim 11, “the metadata of at least one of archived images on the receiving mobile device” recited on lines 9 - 10 of claim 11 or both? Clarification and appropriate correction are required. For purposes of examination, the Examiner will treat the claim as referencing at least one of the “metadata of at least one image” recited on line 9 of claim 11 and “the metadata of at least one of archived images on the receiving mobile device” recited on lines 9 - 10 of claim 11.
Claim 21 recites the limitation "the metadata of at least one of a previously received image to the receiving mobile device” in lines 6 - 7. There is insufficient antecedent basis for this limitation in the claim.
Claim 21 recites the limitation "the user’s behavior of the receiving device with duplicate images," (emphasis added) in lines 10 - 11. There is insufficient antecedent basis for this limitation in the claim.
Claim 22 recites the limitation "the at least one received image” (emphasis added) in line 5. There is insufficient antecedent basis for this limitation in the claim.
Claim 26 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention because it is unclear which metadata “the meta data” recited on line 1 is referencing. Is it referring to the “metadata of at least one image” recited on line 5 of claim 21, “the metadata of at least one of a previously received image to the receiving mobile device” recited on lines 6 - 7 of claim 21 or both? Clarification and appropriate correction are required. For purposes of examination, the Examiner will treat the claim as referencing at least one of the “metadata of at least one image” recited on line 5 of claim 21 and “the metadata of at least one of a previously received image to the receiving mobile device” recited on lines 6 - 7 of claim 21.
Claim 29 recites the limitation "the at least one received image” (emphasis added) in line 2. There is insufficient antecedent basis for this limitation in the claim.
Claim 31 recites the limitation "the user’s behavior of the receiving device with duplicate images," (emphasis added) in lines 8 - 9. There is insufficient antecedent basis for this limitation in the claim.
Claim 31 recites the limitation "the duplicate image" in line 10. There is insufficient antecedent basis for this limitation in the claim.
Claims 2 - 5, 7 - 9, 12 - 15, 17 - 19, 23 - 25 and 27 - 30 are also rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, due to their dependence upon a rejected base claim; these rejections would be withdrawn if the respective base claims overcome the rejections set forth above.
Response to Arguments
Applicant's arguments filed 04 December 2025 have been fully considered but they are not persuasive.
On pages 10 - 11 of the remarks the Applicant’s Representative argues that “none of the cited references disclose or teach utilizing artificial intelligence to learn a user's behavior and, according to the user's behavior of the receiving device with duplicate images, at least one of blocking or archiving the duplicate image based on the behavior of the user of the receiving mobile device when a duplicate metadata is found between the image and the at least one image archived on the receiving mobile device.”
The Examiner respectfully disagrees.
Initially, the Examiner asserts that Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references.
Furthermore, the Examiner asserts that at least Cheng et al. disclose utilizing artificial intelligence to learn a user's behavior and, according to the user's behavior of the receiving device with duplicate images, at least one of blocking or archiving the duplicate image based on the behavior of the user of the receiving mobile device when a duplicate is found between the image and the at least one image archived on the receiving mobile device, see at least figures 1 - 5, page 3 paragraphs 0023 and 0027 - 0029, page 4 paragraphs 0033 - 0037, page 7 paragraphs 0052 - 0053, page 8 paragraphs 0066 - 0067, page 10 paragraph 0084 - page 11 paragraph 0086, page 11 paragraphs 0088 - 0089, page 12 paragraphs 0097 - 0098, page 13 paragraph 0109 and page 14 paragraphs 0120 - 0121 of Cheng et al. wherein they disclose that “functional image archiving suggestions can be personalized to user behavior (e.g., using a machine-learning model as described herein) to suggest archiving more of the types of images that the user archives and less of the types of utility photos that the user does not archive, for example” [0027], that when “making functional image archiving suggestions to a user, it may be helpful for an image system to make functional image archiving suggestions that are in line with a user's previous functional image archiving activity. To make functional image archiving suggestions, a probabilistic model (or other model as described below in conjunction with FIG. 4) can be used to make an inference (or prediction) about how likely an image is to be a functional image and how likely a user is to archive an image or group of images” [0028], that the “probabilistic model can be trained with data including previous functional image archiving activity of one or more users. 
Some implementations can include generating a functional image archiving suggestion for one or more images having a functional image score based on objects in the image or data associated with functional images. The functional image score may be based on an inference from a probabilistic model that is trained using data for which respective users have provided permission for use in training the probabilistic model. Such data may include user image data and image activity data (e.g., archiving data)” [0029], that “the suggestion can be based on a user's previous archiving activity, which can permit the image archiving suggestions to be tailored to different users. Yet another advantage is that the methods and systems described herein can dynamically learn new thresholds (e.g., for confidence scores, etc.) and provide suggestions for images that match the new thresholds” [0034], that “the disclosed functional image archiving can be applied to images other than functional images, such as poor quality images (e.g., pocket shots, blurry images, poorly exposed images, etc.), near duplicates (e.g., the lowest quality near duplicate images), and gifted images (e.g., memes, etc., which can include images received via a social network, messaging service, or the like)” [0035], that “[m]ultiple-source functional image signal prediction can be used, for example, for worse duplicates archiving suggestions (e.g., recommending lower quality duplicate or near-duplicate images for archiving, such as images captured in a burst or near in time at the same location)” [0037], that the “functional image score can indicate the suitability of the image for archiving, e.g., the score can represent a predicted likelihood of a user designating the image for archiving” [0052], that in “some implementations, the archive suggestions are not presented in block 312 and the selected images are designated for automatic archival without user input if particular criteria are met” [0084], that “user 
decisions to archive or not archive images, as well as determined image content, features, categories, and/or other image characteristics of archived images and non-archived images suggested for archival, can be provided to a machine-learning model to help train or tune the model” [0088], that “a machine-learning model can be trained with such user selections, or other model (e.g., statistical model, etc.) can use such frequencies of archiving and non-archiving user selections to determine images to archive using the model (e.g., images having labels with a high frequency of archiving)” [0089] and that “logistic regression can be used for personalization (e.g., personalizing functional image archiving suggestions based on a user's pattern of archiving activity)” [0120].
The Examiner asserts that, as shown herein above and in the cited portions, Cheng et al. disclose utilizing a machine learning model, i.e., artificial intelligence, to learn a user’s image archiving behavior and utilizing the machine learning model to decide whether or not to automatically block and/or archive images of certain types. In addition, the Examiner asserts that Cheng et al. disclose that their disclosed invention can be applied to duplicate and near-duplicate images.
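For illustration only, the personalization mechanism characterized above (a model trained on a user's archiving activity producing a score that drives an archive decision) can be sketched as follows. This sketch is not drawn from the Cheng et al. disclosure; the feature names, weights and thresholds are hypothetical, and logistic regression is used merely because the cited paragraph 0120 mentions it as one option:

```python
import math

def functional_image_score(features: dict, weights: dict, bias: float) -> float:
    """Predicted likelihood that this particular user would archive the image
    (logistic regression over image features, with per-user learned weights)."""
    z = bias + sum(weights.get(name, 0.0) * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def suggest_action(features: dict, weights: dict, bias: float,
                   suggest_at: float = 0.5, auto_archive_at: float = 0.9) -> str:
    """Map the score to an action: archive automatically when the score clears a
    high threshold, suggest archiving at a lower threshold, otherwise keep."""
    score = functional_image_score(features, weights, bias)
    if score >= auto_archive_at:
        return "auto-archive"
    if score >= suggest_at:
        return "suggest-archive"
    return "keep"
```

Under this sketch, the "learning" is simply the fitting of per-user weights from that user's past archive/keep decisions; the thresholds could likewise be tuned per user.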
The Examiner notes that Cheng et al. fail to disclose expressly identifying a duplicate image by finding duplicate metadata.
However, the Examiner asserts that Adams et al. disclose, at least, at least one of blocking or archiving the duplicate image when a duplicate metadata is found between the image and the at least one image archived on the receiving device, see at least figures 2, 3 and 6, page 1 paragraphs 0009 - 0016, page 3 paragraphs 0030 and 0038 - 0042 and page 4 paragraphs 0046 - 0049 of Adams et al. wherein they disclose “determining means for, when the comparing means compares the subset of metadata of the image to be transferred with the corresponding subset of metadata of each of the any image which is already stored at the second location, determining whether the subset of metadata of the image to be transferred is identical to the corresponding subset of metadata of the any image already stored at the second location”, that “when the determining means determines that the subset of metadata of the image to be transferred is identical to the corresponding subset of metadata of the any image already stored at the second location, preventing the transfer of the image from the first location to the second location” and that “image duplication prevention unit (DPU) 108 performs processing for determining, when receiving images from an external device such as a camera or a mobile phone, whether an image has been previously stored in the storage device 106. The DPU 108 compares a hash of metadata of the image to be received with a hash of metadata of each image already stored in the storage device. If there is a hash collision, which indicates two identical images, then the transfer of the image, which is the subject of the hash collision, will not be permitted... 
Alternatively, after the hash collision is identified, the transfer of the image which is the subject of the hash collision with an already stored image may be permitted but a message is displayed to the user via the display 109 and/or the display of the external device that one of the two identical images will be deleted after the transfer.”
The Examiner asserts that, as shown herein above and in the cited portions, Adams et al. disclose that a duplicate image is identified by finding duplicate metadata.
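For illustration only, the metadata-hash duplicate check quoted above (hash a subset of the incoming image's metadata, compare against hashes of already-stored images, and block the transfer on a collision) can be sketched as follows. This sketch does not purport to reproduce the Adams et al. implementation; the chosen metadata fields and function names are hypothetical:

```python
import hashlib

def metadata_hash(metadata: dict, keys: tuple) -> str:
    """Hash a selected subset of an image's metadata fields."""
    subset = "|".join(str(metadata.get(k, "")) for k in keys)
    return hashlib.sha256(subset.encode("utf-8")).hexdigest()

def allow_transfer(incoming: dict, stored: list,
                   keys: tuple = ("capture_time", "dimensions", "camera_model")) -> bool:
    """Permit the transfer only if no already-stored image has an identical
    metadata-subset hash; a collision is treated as two identical images."""
    incoming_hash = metadata_hash(incoming, keys)
    stored_hashes = {metadata_hash(m, keys) for m in stored}
    return incoming_hash not in stored_hashes
```

As in the quoted passage, a collision blocks the transfer (or, in the alternative described, permits it but deletes one of the two identical images afterward); a miss permits the transfer.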
Therefore, the Examiner asserts that, at least, Cheng et al. in view of Adams et al. disclose the aforementioned disputed claim limitation(s).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1 - 6, 11 - 16, 21 - 27, 30 and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Cheng et al. (U.S. Publication No. 2019/0197364 A1) in view of Adams et al. (U.S. Publication No. 2014/0081926 A1).
- With regards to claim 1, Cheng et al. disclose a method for managing receiving duplicate images, (Cheng et al., Fig. 3, Pg. 1 ¶ 0004 - 0005, Pg. 3 ¶ 0023 and 0030 - 0031, Pg. 4 ¶ 0033, 0035 and 0037, Pg. 6 ¶ 0044 - 0047) the method comprising the steps of: sending an image from a mobile device of a user, to a receiving mobile device of another user; (Cheng et al., Figs. 1 - 3, Pg. 2 ¶ 0019, Pg. 3 ¶ 0030 - 0031, Pg. 5 ¶ 0038 - 0040 and 0042 - 0043, Pg. 6 ¶ 0045 - 0049, Pg. 9 ¶ 0071 - 0073, Pg. 11 ¶ 0089 - 0093, Pg. 16 ¶ 0137 [“determine which images among an incoming stream of images are functional in nature and to archive those functional images”, “server system 102 and/or one or more client devices 120-126 can provide a functional image archiving program” and “a camera, cell phone, tablet computer, wearable device, or other client device can capture one or more images and can perform the method 200. In addition, or alternatively, a client device can send one or more captured images to a server over a network, and the server can process the images using method 200”]) comparing the image to at least one archived image on the receiving mobile device of the other user; (Cheng et al., Figs. 1 & 3, Pg. 1 ¶ 0004 - 0005, Pg. 3 ¶ 0023 and 0030 - 0031, Pg. 4 ¶ 0033 and 0035, Pg. 4 ¶ 0037 - Pg. 5 ¶ 0040, Pg. 5 ¶ 0042 - 0043, Pg. 6 ¶ 0045 - 0047, Pg. 11 ¶ 0091 - 0093, Pg. 16 ¶ 0137 [“a multiple-source prediction (e.g., determining a functional image signal based on multiple images)”]) and utilizing artificial intelligence to learn a user’s behavior (Cheng et al., Pg. 3 ¶ 0024 and 0027 - 0029, Pg. 4 ¶ 0033 - 0037, Pg. 7 ¶ 0052 - 0053, Pg. 11 ¶ 0088 - 0089, Pg. 12 ¶ 0097 - 0098, Pg. 13 ¶ 0109, Pg. 14 ¶ 0120 - 0121) and, according to the user’s behavior of the receiving device with duplicate images, (Cheng et al., Pg. 3 ¶ 0027 - 0029, Pg. 4 ¶ 0033 - 0037, Pg. 7 ¶ 0052 - 0053, Pg. 11 ¶ 0088 - 0089, Pg. 12 ¶ 0097 - 0098, Pg. 13 ¶ 0105 - 0106 and 0109, Pg. 14 ¶ 0120 - 0121, Pg. 
15 ¶ 0128) at least one of blocking or archiving the duplicate image based on the behavior of the user of the receiving mobile device when a duplicate is found between the image and the at least one image archived on the receiving mobile device. (Cheng et al., Abstract, Figs. 1 & 3, Pg. 2 ¶ 0020, Pg. 3 ¶ 0023, 0025 and 0030 - 0031, Pg. 4 ¶ 0035, Pg. 4 ¶ 0037 - Pg. 5 ¶ 0043, Pg. 6 ¶ 0045 and 0047, Pg. 8 ¶ 0066, Pg. 9 ¶ 0071 and 0073, Pg. 10 ¶ 0080, Pg. 10 ¶ 0084 - Pg. 11 ¶ 0086, Pg. 11 ¶ 0091 - 0093, Pg. 16 ¶ 0137 [“Archiving of functional images is described herein to help illustrate the disclosed subject matter… It will be appreciated that the disclosed functional image archiving can be applied to images other than functional images, such as poor quality images (e.g., pocket shots, blurry images, poorly exposed images, etc.), near duplicates (e.g., the lowest quality near duplicate images), and gifted images (e.g., memes, etc., which can include images received via a social network, messaging service, or the like)”, “as described below in method 300, actions can be performed on images associated with a functional image signal, including, for example, auto-archiving, archiving based on user input, or deleting the images” (emphasis added), “an auto-archiving system can request that images meeting particular archiving criteria be archived without requiring user input or intervention (e.g., without providing suggestions of block 312)” and “archiving can include moving the image data of the archived images to be stored in an archive storage space of data storage (e.g., a storage area, a storage device, a class of storage such as long term storage, an archive folder, etc. that is different than the storage used for non-archived images), e.g., to reduce storage space usage on user devices and/or server devices”]) Cheng et al. 
fail to disclose explicitly comparing metadata of the image to the metadata of at least one archived image; and at least one of blocking or archiving when a duplicate metadata is found, i.e., Cheng et al. fail to disclose explicitly identifying a duplicate image by finding duplicate metadata. Pertaining to analogous art, Adams et al. disclose a method for managing receiving duplicate images, (Adams et al., Abstract, Figs. 3 & 6, Pg. 1 ¶ 0003 and 0007, Pg. 3 ¶ 0030 and 0040 - 0041) the method comprising the steps of: sending an image from a mobile device of a user, to a receiving device; (Adams et al., Figs. 1 & 3 - 6, Pg. 1 ¶ 0005 - 0007 and 0010 - 0016, Pg. 2 ¶ 0026, Pg. 3 ¶ 0030 - 0031, 0033, 0035 - 0039 and 0041 - 0042, Pg. 4 ¶ 0045 - 0047 and 0049 [“image duplication prevention unit (DPU) 108 performs processing for determining, when receiving images from an external device such as a camera or a mobile phone, whether an image has been previously stored in the storage device 106”]) comparing metadata of the image to the metadata of at least one archived image on the receiving device; (Adams et al., Abstract, Figs. 3 & 6, Pg. 1 ¶ 0010 - 0013 and 0015, Pg. 2 ¶ 0026, Pg. 3 ¶ 0030 - 0031, 0036 and 0041 - 0042, Pg. 4 ¶ 0045 - 0047 and 0049) and at least one of blocking or archiving the duplicate image when a duplicate metadata is found between the image and the at least one image archived on the receiving device. (Adams et al., Figs. 3 & 6, Pg. 1 ¶ 0014, Pg. 3 ¶ 0030 and 0041 - 0042, Pg. 4 ¶ 0046 - 0047 and 0049) Cheng et al. and Adams et al. are combinable because they are both directed towards methods and systems of facilitating image storage management as well as managing image duplicates. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Cheng et al. with the teachings of Adams et al. 
This modification would have been prompted in order to substitute the duplicate image detection process of Adams et al. for the duplicate image identification technique of Cheng et al. The duplicate image detection process of Adams et al. could be substituted in place of the duplicate image identification technique of Cheng et al. utilizing well-known techniques in the art and would likely yield predictable results, in that in the combination the duplicate image detection process based on comparing and identifying duplicate metadata between images of Adams et al. would be utilized to identify duplicate images. In addition, this modification would have been prompted in order to enhance the base device of Cheng et al. with the well-known and applicable technique Adams et al. applied to a comparable device. Comparing metadata of the image to the metadata of at least one archived image and identifying a duplicate image by finding duplicate metadata, as taught by Adams et al., would enhance the base device of Cheng et al. by improving its ability to quickly, efficiently and reliably identify duplicate images since fewer computational resources are required to compare metadata of images and because metadata comparisons enable comparisons between image files which are in different formats, as taught and suggested by Adams et al., see at least page 1 paragraph 0015 of Adams et al. Furthermore, this modification would have been prompted by the teachings and suggestions of Cheng et al. that image pixel data and/or image metadata can be analyzed to identify image labels and generate functional image signals and scores and that functional image signal prediction can be used for duplicates archiving suggestions, see at least page 3 paragraphs 0024 - 0025, page 4 paragraphs 0035 - 0037, page 6 paragraphs 0049 - 0050, page 7 paragraph 0058 - page 8 paragraph 0065, page 12 paragraphs 0097 - 0098 and page 13 paragraphs 0105 - 0106 and 0109 - 0110 of Cheng et al. 
This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the duplicate image detection process based on comparing and identifying duplicate metadata between images of Adams et al. would be utilized to identify duplicate images. Therefore, it would have been obvious to combine Cheng et al. with Adams et al. to obtain the invention as specified in claim 1.
- With regards to claim 2, Cheng et al. in view of Adams et al. disclose the method of claim 1, further comprising archiving the image when a duplicate is not found between the image and the at least one archived image. (Cheng et al., Figs. 4 & 5, Pg. 3 ¶ 0023 and 0030 - 0031, Pg. 4 ¶ 0035 - 0037, Pg. 5 ¶ 0039 - 0040 and 0043, Pg. 6 ¶ 0047, Pg. 9 ¶ 0071 - 0075, Pg. 11 ¶ 0086 - 0087, Pg. 16 ¶ 0136 - Pg. 17 ¶ 0139) In addition, analogous art Adams et al. disclose archiving the image when a duplicate is not found between the image and the at least one archived image. (Adams et al., Figs. 3 & 6, Pg. 1 ¶ 0006 - 0008 and 0014, Pg. 3 ¶ 0030 and 0035 - 0041, Pg. 4 ¶ 0045 - 0047 and 0049)
- With regards to claim 3, Cheng et al. in view of Adams et al. disclose the method of claim 1, wherein the method is executed by a processor on a cellular phone, tablet, watch or mobile personal computer. (Cheng et al., Figs. 1 & 4, Pg. 2 ¶ 0012, Pg. 3 ¶ 0030 - 0031, Pg. 5 ¶ 0038 - 0040 and 0042 - 0043, Pg. 6 ¶ 0045, Pg. 9 ¶ 0071, Pg. 11 ¶ 0091 - 0095, Pg. 12 ¶ 0097, Pg. 16 ¶ 0135 - 0137)
- With regards to claim 4, Cheng et al. in view of Adams et al. disclose the method of claim 3, wherein the method is an application on the cellular phone, tablet, watch or mobile personal computer. (Cheng et al., Figs. 1 & 4, Pg. 2 ¶ 0012, Pg. 3 ¶ 0030 - 0031, Pg. 5 ¶ 0038 - 0040 and 0042 - 0043, Pg. 6 ¶ 0045, Pg. 9 ¶ 0071, Pg. 11 ¶ 0091 - 0095, Pg. 12 ¶ 0097, Pg. 16 ¶ 0135 - 0137)
- With regards to claim 5, Cheng et al. in view of Adams et al. disclose the method of claim 1. Cheng et al. fail to disclose expressly wherein the method utilizes metadata to perform the comparisons. Pertaining to analogous art, Adams et al. disclose wherein the method utilizes metadata to perform the comparisons. (Adams et al., Abstract, Figs. 2, 3 & 6, Pg. 1 ¶ 0011 - 0015, Pg. 3 ¶ 0030 and 0041, Pg. 4 ¶ 0046 and 0049)
- With regards to claim 6, Cheng et al. in view of Adams et al. disclose the method of claim 1, wherein the meta data comprises at least one of RGB values, pixel values and average pixel values. (Cheng et al., Pg. 1 ¶ 0007 - 0009, Pg. 3 ¶ 0024, Pg. 6 ¶ 0049 - 0051, Pg. 7 ¶ 0058 - 0059 [“other types of image metadata can be used to determine the functional image score, e.g., EXIF data (describing settings or characteristics of a camera capturing the image), timestamp, etc.”]) In addition, analogous art Adams et al. disclose wherein the meta data comprises at least one of RGB values, pixel values and average pixel values. (Adams et al., Fig. 2, Pg. 1 ¶ 0015, Pg. 2 ¶ 0021, Pg. 3 ¶ 0038 - 0041)
- With regards to claim 11, [As Best Understood by the Examiner] Cheng et al. disclose a system for managing receiving duplicate images on a receiving mobile device of a user, (Cheng et al., Figs. 1, 3 & 4, Pg. 1 ¶ 0004 - 0005, Pg. 3 ¶ 0023 and 0030 - 0031, Pg. 4 ¶ 0033 and 0035, Pg. 4 ¶ 0037 - Pg. 5 ¶ 0040, Pg. 5 ¶ 0042 - 0043, Pg. 6 ¶ 0045 - 0047, Pg. 11 ¶ 0091 - 0093, Pg. 16 ¶ 0135 - 0137) the system comprising: a processor adapted to execute computer instruction; (Cheng et al., Figs. 1 & 4, Pg. 1 ¶ 0005, Pg. 1 ¶ 0010 - Pg. 2 ¶ 0013, Pg. 3 ¶ 0030 - 0031, Pg. 5 ¶ 0038 - 0042, Pg. 6 ¶ 0045, Pg. 9 ¶ 0071 - 0073, Pg. 11 ¶ 0094 - Pg. 12 ¶ 0096, Pg. 13 ¶ 0108, Pg. 14 ¶ 0115 - 0119, Pg. 16 ¶ 0136 - 0137) input/output module for receiving images; (Cheng et al., Figs. 1 - 4, Pg. 1 ¶ 0010, Pg. 2 ¶ 0012 and 0019, Pg. 3 ¶ 0030 - Pg. 4 ¶ 0032, Pg. 5 ¶ 0038 - 0040 and 0042 - 0043, Pg. 6 ¶ 0047, Pg. 8 ¶ 0068 - 0069, Pg. 9 ¶ 0071 - 0073, Pg. 11 ¶ 0091 - Pg. 12 ¶ 0096, Pg. 14 ¶ 0116 - 0119, Pg. 16 ¶ 0136 - 0137) a computer readable medium for archiving at least one of computer instructions and images; (Cheng et al., Figs. 1 & 4, Pg. 1 ¶ 0005, Pg. 1 ¶ 0010 - Pg. 2 ¶ 0013, Pg. 3 ¶ 0023 and 0030 - 0031, Pg. 5 ¶ 0038 - 0042, Pg. 6 ¶ 0045, Pg. 9 ¶ 0071 - 0073, Pg. 11 ¶ 0086, Pg. 11 ¶ 0094 - Pg. 12 ¶ 0096, Pg. 13 ¶ 0108, Pg. 14 ¶ 0115 - 0119, Pg. 16 ¶ 0136 - 0137) a duplicate module coupled to the processor on the receiving device, the input/output module, and the computer readable medium, (Cheng et al., Figs. 1 & 4, Pg. 1 ¶ 0005, Pg. 1 ¶ 0010 - Pg. 2 ¶ 0013, Pg. 3 ¶ 0030 - 0031, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0042, Pg. 6 ¶ 0045, Pg. 9 ¶ 0071 - 0073, Pg. 11 ¶ 0094 - Pg. 12 ¶ 0096, Pg. 13 ¶ 0108, Pg. 14 ¶ 0115 - 0119, Pg. 16 ¶ 0136 - 0137) wherein the duplicate module compares at least one image sent from a mobile device of another user to at least one of archived images on the receiving mobile device (Cheng et al., Figs. 1 - 3, Pg. 1 ¶ 0004 - 0005, Pg. 2 ¶ 0019, Pg. 
3 ¶ 0023 and 0030 - 0031, Pg. 4 ¶ 0033 and 0035, Pg. 4 ¶ 0037 - Pg. 5 ¶ 0040, Pg. 5 ¶ 0042 - 0043, Pg. 6 ¶ 0045 - 0049 and 0051, Pg. 9 ¶ 0071 - 0073, Pg. 11 ¶ 0089 - 0093, Pg. 16 ¶ 0137 [“determine which images among an incoming stream of images are functional in nature and to archive those functional images”, “a multiple-source prediction (e.g., determining a functional image signal based on multiple images)”, “server system 102 and/or one or more client devices 120-126 can provide a functional image archiving program” and “a camera, cell phone, tablet computer, wearable device, or other client device can capture one or more images and can perform the method 200. In addition, or alternatively, a client device can send one or more captured images to a server over a network, and the server can process the images using method 200”]) wherein the archived images were previously received by the receiving mobile device; (Cheng et al., Figs. 1, 3 & 4, Pg. 2 ¶ 0019, Pg. 3 ¶ 0028 - 0031, Pg. 4 ¶ 0033 - 0037, Pg. 5 ¶ 0040 and 0042 - 0043, Pg. 6 ¶ 0047 - 0049, Pg. 9 ¶ 0073 - 0076, Pg. 11 ¶ 0086 - 0089) and utilizing artificial intelligence to learn a user’s behavior (Cheng et al., Pg. 3 ¶ 0024 and 0027 - 0029, Pg. 4 ¶ 0033 - 0037, Pg. 7 ¶ 0052 - 0053, Pg. 11 ¶ 0088 - 0089, Pg. 12 ¶ 0097 - 0098, Pg. 13 ¶ 0109, Pg. 14 ¶ 0120 - 0121) and, according to the user’s behavior of the receiving device with duplicate images, (Cheng et al., Pg. 3 ¶ 0027 - 0029, Pg. 4 ¶ 0033 - 0037, Pg. 7 ¶ 0052 - 0053, Pg. 11 ¶ 0088 - 0089, Pg. 12 ¶ 0097 - 0098, Pg. 13 ¶ 0105 - 0106 and 0109, Pg. 14 ¶ 0120 - 0121, Pg. 15 ¶ 0128) at least one of blocking or archiving the duplicate image based on the behavior of the user (Cheng et al., Pg. 3 ¶ 0027 - 0029, Pg. 4 ¶ 0033 - 0037, Pg. 7 ¶ 0052 - 0053, Pg. 11 ¶ 0088 - 0089, Pg. 12 ¶ 0097 - 0098, Pg. 13 ¶ 0105 - 0106 and 0109, Pg. 14 ¶ 0120 - 0121, Pg. 
15 ¶ 0128) of the at least one image from being archived by the receiving mobile device when a duplicate is found between the at least one image and images archived on the receiving mobile device. (Cheng et al., Abstract, Figs. 1 & 3, Pg. 2 ¶ 0020, Pg. 3 ¶ 0023, 0025 and 0030 - 0031, Pg. 4 ¶ 0035, Pg. 4 ¶ 0037 - Pg. 5 ¶ 0043, Pg. 6 ¶ 0045 and 0047, Pg. 8 ¶ 0066, Pg. 9 ¶ 0071 and 0073, Pg. 10 ¶ 0080, Pg. 10 ¶ 0084 - Pg. 11 ¶ 0086, Pg. 11 ¶ 0091 - 0093, Pg. 16 ¶ 0137 [“Archiving of functional images is described herein to help illustrate the disclosed subject matter… It will be appreciated that the disclosed functional image archiving can be applied to images other than functional images, such as poor quality images (e.g., pocket shots, blurry images, poorly exposed images, etc.), near duplicates (e.g., the lowest quality near duplicate images), and gifted images (e.g., memes, etc., which can include images received via a social network, messaging service, or the like)”, “as described below in method 300, actions can be performed on images associated with a functional image signal, including, for example, auto-archiving, archiving based on user input, or deleting the images” (emphasis added), “an auto-archiving system can request that images meeting particular archiving criteria be archived without requiring user input or intervention (e.g., without providing suggestions of block 312)” and “archiving can include moving the image data of the archived images to be stored in an archive storage space of data storage (e.g., a storage area, a storage device, a class of storage such as long term storage, an archive folder, etc. that is different than the storage used for non-archived images), e.g., to reduce storage space usage on user devices and/or server devices”]) Cheng et al. 
fail to disclose explicitly comparing metadata of at least one image to the metadata of at least one of archived images; and at least one of blocking or archiving when a duplicate metadata is found, i.e., Cheng et al. fail to disclose explicitly identifying a duplicate image by finding duplicate metadata. Pertaining to analogous art, Adams et al. disclose a system for managing receiving duplicate images on a receiving mobile device, (Adams et al., Abstract, Figs. 1 & 3 - 6, Pg. 1 ¶ 0003 - 0007, Pg. 2 ¶ 0026 - 0027, Pg. 3 ¶ 0030 - 0033 and 0038 - 0039, Pg. 3 ¶ 0041 - Pg. 4 ¶ 0043, Pg. 4 ¶ 0045 - 0049) the system comprising: a processor adapted to execute computer instruction; (Adams et al., Figs. 1, 4 & 5, Pg. 2 ¶ 0026 - 0028, Pg. 3 ¶ 0030 - 0032 and 0034, Pg. 4 ¶ 0043 and 0045, Pg. 5 ¶ 0050) input/output module for receiving images; (Adams et al., Figs. 1, 3, 5 & 6, Pg. 3 ¶ 0030 - 0031, 0035 - 0039 and 0041, Pg. 4 ¶ 0045 - 0046 and 0049) a computer readable medium for archiving at least one of computer instructions and images; (Adams et al., Figs. 1 & 5, Pg. 2 ¶ 0026 - 0028, Pg. 3 ¶ 0030 - 0032 and 0035, Pg. 5 ¶ 0050) a duplicate module coupled to the processor on the receiving mobile device, the input/output module, and the computer readable medium, (Adams et al., Figs. 1 & 5, Pg. 2 ¶ 0026 - 0028, Pg. 3 ¶ 0030 - 0032 and 0041, Pg. 4 ¶ 0045, Pg. 5 ¶ 0050 [“image duplication prevention unit (DPU) 108 performs processing for determining, when receiving images from an external device such as a camera or a mobile phone, whether an image has been previously stored in the storage device 106” and “the DPU 108 may alternatively be implemented as a program run by the CPU 101”]) wherein the duplicate module compares metadata of at least one image sent from a mobile device of another user to the metadata of at least one of archived images on the receiving mobile device (Adams et al., Abstract, Figs. 1 & 3 - 6, Pg. 1 ¶ 0005 - 0007, 0010 - 0013 and 0015, Pg. 2 ¶ 0026, Pg. 
3 ¶ 0030 - 0031, 0035 - 0039 and 0041, Pg. 4 ¶ 0045 - 0047 and 0049 [“image duplication prevention unit (DPU) 108 performs processing for determining, when receiving images from an external device such as a camera or a mobile phone, whether an image has been previously stored in the storage device 106”]) wherein the archived images were previously received by the receiving mobile device; (Adams et al., Abstract, Figs. 3 & 6, Pg. 1 ¶ 0005 - 0007 and 0012 - 0015, Pg. 3 ¶ 0030, 0032, 0035 - 0039 and 0041 - 0042, Pg. 4 ¶ 0045 - 0047 and 0049) and at least one of blocking or archiving the duplicate image when a duplicate metadata is found between the at least one image and images archived on the receiving mobile device. (Adams et al., Abstract, Figs. 3 & 6, Pg. 1 ¶ 0014, Pg. 3 ¶ 0030 and 0041 - 0042, Pg. 4 ¶ 0045 - 0047 and 0049) Cheng et al. and Adams et al. are combinable because they are both directed towards methods and systems of facilitating image storage management as well as managing image duplicates. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Cheng et al. with the teachings of Adams et al. This modification would have been prompted in order to substitute the duplicate image detection process of Adams et al. for the duplicate image identification technique of Cheng et al. The duplicate image detection process of Adams et al. could be substituted in place of the duplicate image identification technique of Cheng et al. utilizing well-known techniques in the art and would likely yield predictable results, in that in the combination the duplicate image detection process based on comparing and identifying duplicate metadata between images of Adams et al. would be utilized to identify duplicate images. In addition, this modification would have been prompted in order to enhance the base device of Cheng et al. with the well-known and applicable technique Adams et al. 
applied to a comparable device. Comparing metadata of the image to the metadata of at least one archived image and identifying a duplicate image by finding duplicate metadata, as taught by Adams et al., would enhance the base device of Cheng et al. by improving its ability to quickly, efficiently and reliably identify duplicate images since fewer computational resources are required to compare metadata of images and because metadata comparisons enable comparisons between image files which are in different formats, as taught and suggested by Adams et al., see at least page 1 paragraph 0015 of Adams et al. Furthermore, this modification would have been prompted by the teachings and suggestions of Cheng et al. that image pixel data and/or image metadata can be analyzed to identify image labels and generate functional image signals and scores and that functional image signal prediction can be used for duplicates archiving suggestions, see at least page 3 paragraphs 0024 - 0025, page 4 paragraphs 0035 - 0037, page 6 paragraphs 0049 - 0050, page 7 paragraph 0058 - page 8 paragraph 0065, page 12 paragraphs 0097 - 0098 and page 13 paragraphs 0105 - 0106 and 0109 - 0110 of Cheng et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the duplicate image detection process based on comparing and identifying duplicate metadata between images of Adams et al. would be utilized to identify duplicate images. Therefore, it would have been obvious to combine Cheng et al. with Adams et al. to obtain the invention as specified in claim 11.
- With regards to claim 12, Cheng et al. in view of Adams et al. disclose the system of claim 11, wherein the duplicate module further archives the at least one image when a duplicate is not found between the at least one image and the images archived on the receiving device and images previously received at the receiving device. (Cheng et al., Figs. 4 & 5, Pg. 3 ¶ 0023 and 0030 - 0031, Pg. 4 ¶ 0035 - 0037, Pg. 5 ¶ 0039 - 0040 and 0043, Pg. 6 ¶ 0047, Pg. 9 ¶ 0071 - 0075, Pg. 11 ¶ 0086 - 0087, Pg. 16 ¶ 0136 - Pg. 17 ¶ 0139) In addition, analogous art Adams et al. disclose wherein the duplicate module further archives the at least one image when a duplicate is not found between the at least one image and the images archived on the receiving device and images previously received at the receiving device. (Adams et al., Figs. 3 & 6, Pg. 1 ¶ 0006 - 0008 and 0014, Pg. 3 ¶ 0030 and 0035 - 0041, Pg. 4 ¶ 0045 - 0047 and 0049)
- With regards to claim 13, Cheng et al. in view of Adams et al. disclose the system of claim 11, wherein the receiving mobile device is a cellular phone, tablet, watch or mobile personal computer. (Cheng et al., Figs. 1 & 4, Pg. 2 ¶ 0012, Pg. 3 ¶ 0030 - 0031, Pg. 5 ¶ 0038 - 0040 and 0042 - 0043, Pg. 6 ¶ 0045, Pg. 9 ¶ 0071, Pg. 11 ¶ 0091 - 0095, Pg. 12 ¶ 0097, Pg. 16 ¶ 0135 - 0137)
- With regards to claim 14, Cheng et al. in view of Adams et al. disclose the system of claim 12, wherein the duplicate module is an application on the receiving mobile device, cellular phone, tablet, watch or mobile personal computer. (Cheng et al., Figs. 1 & 4, Pg. 2 ¶ 0012, Pg. 3 ¶ 0030 - 0031, Pg. 5 ¶ 0038 - 0040 and 0042 - 0043, Pg. 6 ¶ 0045, Pg. 9 ¶ 0071, Pg. 11 ¶ 0091 - 0095, Pg. 12 ¶ 0097, Pg. 16 ¶ 0135 - 0137)
- With regards to claim 15, Cheng et al. in view of Adams et al. disclose the system of claim 11. Cheng et al. fail to disclose expressly wherein the duplicate module utilizes metadata to perform the comparisons. Pertaining to analogous art, Adams et al. disclose wherein the duplicate module utilizes metadata to perform the comparisons. (Adams et al., Abstract, Figs. 2, 3 & 6, Pg. 1 ¶ 0011 - 0015, Pg. 3 ¶ 0030 and 0041, Pg. 4 ¶ 0046 and 0049)
- With regards to claim 16, Cheng et al. in view of Adams et al. disclose the system of claim 11, wherein the meta data comprises at least one of RGB values, pixel values and average pixel values. (Cheng et al., Pg. 1 ¶ 0007 - 0009, Pg. 3 ¶ 0024, Pg. 6 ¶ 0049 - 0051, Pg. 7 ¶ 0058 - 0059 [“other types of image metadata can be used to determine the functional image score, e.g., EXIF data (describing settings or characteristics of a camera capturing the image), timestamp, etc.”]) In addition, analogous art Adams et al. disclose wherein the meta data comprises at least one of RGB values, pixel values and average pixel values. (Adams et al., Fig. 2, Pg. 1 ¶ 0015, Pg. 2 ¶ 0021, Pg. 3 ¶ 0038 - 0041)
- With regards to claim 21, Cheng et al. disclose an apparatus for managing receiving duplicate images, (Cheng et al., Figs. 1 & 4, Pg. 3 ¶ 0030 - 0031, Pg. 4 ¶ 0033 and 0035, Pg. 4 ¶ 0037 - Pg. 5 ¶ 0040, Pg. 6 ¶ 0045, Pg. 11 ¶ 0091 - 0095) the apparatus comprising: an image archive for storing images on a receiving mobile device; (Cheng et al., Figs. 1, 3 & 4, Pg. 3 ¶ 0023 and 0030 - 0031, Pg. 5 ¶ 0038 - 0040 and 0042 - 0043, Pg. 11 ¶ 0086 and 0091 - 0093, Pg. 16 ¶ 0137) a comparator of the receiving mobile device coupled to the image archive (Cheng et al., Figs. 1 & 4, Pg. 1 ¶ 0005, Pg. 1 ¶ 0010 - Pg. 2 ¶ 0013, Pg. 3 ¶ 0030 - 0031, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0042, Pg. 6 ¶ 0045, Pg. 9 ¶ 0071 - 0073, Pg. 11 ¶ 0094 - Pg. 12 ¶ 0096, Pg. 13 ¶ 0108, Pg. 14 ¶ 0115 - 0119, Pg. 16 ¶ 0136 - 0137) and adapted to compare at least one image from at least one of a sending mobile device or multiple sending mobile devices (Cheng et al., Fig. 1, Pg. 3 ¶ 0030 - 0031, Pg. 5 ¶ 0038 - 0040 and 0042 - 0043, Pg. 6 ¶ 0045 - 0049, Pg. 9 ¶ 0073, Pg. 11 ¶ 0091 - 0093, Pg. 16 ¶ 0137) to at least one of a previously received image to the receiving mobile device (Cheng et al., Figs. 1 - 4, Pg. 1 ¶ 0004 - 0005, Pg. 2 ¶ 0019, Pg. 3 ¶ 0023 and 0028 - 0031, Pg. 4 ¶ 0033 - 0037, Pg. 5 ¶ 0038 - 0040 and 0041 - 0042, Pg. 6 ¶ 0044 - 0047, 0049 and 0051, Pg. 9 ¶ 0071 - 0076, Pg. 11 ¶ 0086 - 0093, Pg. 16 ¶ 0137 [“determine which images among an incoming stream of images are functional in nature and to archive those functional images”, “a multiple-source prediction (e.g., determining a functional image signal based on multiple images)”, “server system 102 and/or one or more client devices 120-126 can provide a functional image archiving program” and “a camera, cell phone, tablet computer, wearable device, or other client device can capture one or more images and can perform the method 200. 
In addition, or alternatively, a client device can send one or more captured images to a server over a network, and the server can process the images using method 200”]) and determining if the at least one image received by the receiving device is a duplicate image; (Cheng et al., Fig. 3, Pg. 1 ¶ 0004 - 0005, Pg. 3 ¶ 0023 and 0030 - 0031, Pg. 4 ¶ 0033, 0035 and 0037, Pg. 6 ¶ 0044 - 0047) and a blocker coupled to the comparator, (Cheng et al., Figs. 1, 3 & 4, Pg. 1 ¶ 0010, Pg. 3 ¶ 0030 - 0031, Pg. 4 ¶ 0033, 0035 and 0037, Pg. 5 ¶ 0042, Pg. 6 ¶ 0045, Pg. 11 ¶ 0091 - Pg. 12 ¶ 0096) wherein the blocker utilizes artificial intelligence to learn a user’s behavior (Cheng et al., Pg. 3 ¶ 0024 and 0027 - 0029, Pg. 4 ¶ 0033 - 0037, Pg. 7 ¶ 0052 - 0053, Pg. 11 ¶ 0088 - 0089, Pg. 12 ¶ 0097 - 0098, Pg. 13 ¶ 0109, Pg. 14 ¶ 0120 - 0121) and, according to the user’s behavior of the receiving device with duplicate images, (Cheng et al., Pg. 3 ¶ 0027 - 0029, Pg. 4 ¶ 0033 - 0037, Pg. 7 ¶ 0052 - 0053, Pg. 11 ¶ 0088 - 0089, Pg. 12 ¶ 0097 - 0098, Pg. 13 ¶ 0105 - 0106 and 0109, Pg. 14 ¶ 0120 - 0121, Pg. 15 ¶ 0128) blocks a user from viewing the image when the comparator finds a duplicate by at least one of blocking or archiving the duplicate image (Cheng et al., Abstract, Figs. 1 & 3, Pg. 2 ¶ 0020, Pg. 3 ¶ 0023, 0025 and 0030 - 0031, Pg. 4 ¶ 0035, Pg. 4 ¶ 0037 - Pg. 5 ¶ 0043, Pg. 6 ¶ 0045 and 0047, Pg. 8 ¶ 0066, Pg. 9 ¶ 0071 and 0073, Pg. 10 ¶ 0080, Pg. 10 ¶ 0084 - Pg. 11 ¶ 0086, Pg. 11 ¶ 0091 - 0093, Pg. 
16 ¶ 0137 [“Archiving of functional images is described herein to help illustrate the disclosed subject matter… It will be appreciated that the disclosed functional image archiving can be applied to images other than functional images, such as poor quality images (e.g., pocket shots, blurry images, poorly exposed images, etc.), near duplicates (e.g., the lowest quality near duplicate images), and gifted images (e.g., memes, etc., which can include images received via a social network, messaging service, or the like)”, “as described below in method 300, actions can be performed on images associated with a functional image signal, including, for example, auto-archiving, archiving based on user input, or deleting the images” (emphasis added), “an auto-archiving system can request that images meeting particular archiving criteria be archived without requiring user input or intervention (e.g., without providing suggestions of block 312)” and “archiving can include moving the image data of the archived images to be stored in an archive storage space of data storage (e.g., a storage area, a storage device, a class of storage such as long term storage, an archive folder, etc. that is different than the storage used for non-archived images), e.g., to reduce storage space usage on user devices and/or server devices”]) based on the behavior of the user of the receiving device. (Cheng et al., Pg. 3 ¶ 0027 - 0029, Pg. 4 ¶ 0033 - 0037, Pg. 7 ¶ 0052 - 0053, Pg. 11 ¶ 0088 - 0089, Pg. 12 ¶ 0097 - 0098, Pg. 13 ¶ 0105 - 0106 and 0109, Pg. 14 ¶ 0120 - 0121, Pg. 15 ¶ 0128) Cheng et al. fail to disclose explicitly comparing metadata of at least one image to the metadata of at least one of a previously received image; and blocking a user from viewing the image when a duplicate metadata is found, i.e., Cheng et al. fail to disclose explicitly identifying a duplicate image by finding duplicate metadata. Pertaining to analogous art, Adams et al. 
disclose an apparatus for managing receiving duplicate images, (Adams et al., Abstract, Figs. 1 & 3 - 6, Pg. 1 ¶ 0003 and 0007, Pg. 2 ¶ 0026 - 0027, Pg. 3 ¶ 0030 and 0041) the apparatus comprising: an image archive for storing images on a receiving device; (Adams et al., Figs. 1 & 5, Pg. 2 ¶ 0026 - 0028, Pg. 3 ¶ 0030 and 0035 - 0041, Pg. 4 ¶ 0045 - 0047 and 0049) a comparator of the receiving device coupled to the image archive (Adams et al., Figs. 1 & 5, Pg. 2 ¶ 0026 - 0028, Pg. 3 ¶ 0030 - 0032 and 0041, Pg. 4 ¶ 0045, Pg. 5 ¶ 0050 [“image duplication prevention unit (DPU) 108 performs processing for determining, when receiving images from an external device such as a camera or a mobile phone, whether an image has been previously stored in the storage device 106” and “the DPU 108 may alternatively be implemented as a program run by the CPU 101”]) and adapted to compare metadata of at least one image from at least one of a sending mobile device or multiple sending mobile devices to the metadata of at least one of a previously received image to the receiving device (Adams et al., Abstract, Figs. 3 & 6, Pg. 1 ¶ 0010 - 0013 and 0015, Pg. 2 ¶ 0026, Pg. 3 ¶ 0030 - 0031, 0036 - 0039 and 0041 - 0042, Pg. 4 ¶ 0045 - 0047 and 0049) and determining if the at least one image received by the receiving device is a duplicate image; (Adams et al., Pg. 1 ¶ 0014 - 0015, Pg. 3 ¶ 0030 and 0041 - 0042, Pg. 4 ¶ 0046 - 0047 and 0049) and a blocker coupled to the comparator, (Adams et al., Figs. 1 & 5, Pg. 2 ¶ 0026 - 0028, Pg. 3 ¶ 0030 - 0032 and 0041 - 0042, Pg. 4 ¶ 0045 - 0047, Pg. 4 ¶ 0049 - Pg. 5 ¶ 0050) wherein the blocker blocks a user from viewing the image when the comparator finds a duplicate metadata by at least one of blocking or archiving the duplicate image. (Adams et al., Figs. 3 & 6, Pg. 1 ¶ 0014, Pg. 3 ¶ 0030 and 0041 - 0042, Pg. 4 ¶ 0045 - 0047 and 0049) Cheng et al. and Adams et al. 
are combinable because they are both directed towards methods and systems of facilitating image storage management as well as managing image duplicates. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Cheng et al. with the teachings of Adams et al. This modification would have been prompted in order to substitute the duplicate image detection process of Adams et al. for the duplicate image identification technique of Cheng et al. The duplicate image detection process of Adams et al. could be substituted in place of the duplicate image identification technique of Cheng et al. utilizing well-known techniques in the art and would likely yield predictable results, in that in the combination the duplicate image detection process based on comparing and identifying duplicate metadata between images of Adams et al. would be utilized to identify duplicate images. In addition, this modification would have been prompted in order to enhance the base device of Cheng et al. with the well-known and applicable technique Adams et al. applied to a comparable device. Comparing metadata of the image to the metadata of at least one archived image and identifying a duplicate image by finding duplicate metadata, as taught by Adams et al., would enhance the base device of Cheng et al. by improving its ability to quickly, efficiently and reliably identify duplicate images since fewer computational resources are required to compare metadata of images and because metadata comparisons enable comparisons between image files which are in different formats, as taught and suggested by Adams et al., see at least page 1 paragraph 0015 of Adams et al. Furthermore, this modification would have been prompted by the teachings and suggestions of Cheng et al. 
that image pixel data and/or image metadata can be analyzed to identify image labels and generate functional image signals and scores and that functional image signal prediction can be used for duplicates archiving suggestions, see at least page 3 paragraphs 0024 - 0025, page 4 paragraphs 0035 - 0037, page 6 paragraphs 0049 - 0050, page 7 paragraph 0058 - page 8 paragraph 0065, page 12 paragraphs 0097 - 0098 and page 13 paragraphs 0105 - 0106 and 0109 - 0110 of Cheng et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the duplicate image detection process based on comparing and identifying duplicate metadata between images of Adams et al. would be utilized to identify duplicate images. Therefore, it would have been obvious to combine Cheng et al. with Adams et al. to obtain the invention as specified in claim 21.
- With regards to claim 22, [As Best Understood by the Examiner] Cheng et al. in view of Adams et al. disclose the apparatus of claim 21, further comprising an archiver (Cheng et al., Figs. 1, 3 & 4, Pg. 3 ¶ 0023 and 0030 - 0031, Pg. 5 ¶ 0039 - 0040 and 0043, Pg. 11 ¶ 0086) adapted to archive the at least one image from the at least one of a sending mobile device or the multiple sending mobile devices in a designated location, (Cheng et al., Abstract, Figs. 1 & 3, Pg. 2 ¶ 0019, Pg. 3 ¶ 0023 and 0030 - 0031, Pg. 4 ¶ 0035, Pg. 4 ¶ 0037 - Pg. 5 ¶ 0040, Pg. 5 ¶ 0042 - 0043, Pg. 6 ¶ 0045 and 0047, Pg. 8 ¶ 0066, Pg. 9 ¶ 0071 and 0073, Pg. 10 ¶ 0080, Pg. 11 ¶ 0086 and 0091 - 0093, Pg. 16 ¶ 0137 [“Archiving of functional images is described herein to help illustrate the disclosed subject matter… It will be appreciated that the disclosed functional image archiving can be applied to images other than functional images, such as poor quality images (e.g., pocket shots, blurry images, poorly exposed images, etc.), near duplicates (e.g., the lowest quality near duplicate images), and gifted images (e.g., memes, etc., which can include images received via a social network, messaging service, or the like)” and “archiving can include moving the image data of the archived images to be stored in an archive storage space of data storage (e.g., a storage area, a storage device, a class of storage such as long term storage, an archive folder, etc. that is different than the storage used for non-archived images), e.g., to reduce storage space usage on user devices and/or server devices”]) wherein the designated location is designated by the artificial intelligence based on user behavior to archive duplicate received images (Cheng et al., Abstract, Fig. 3, Pg. 3 ¶ 0023 and 0027 - 0029, Pg. 4 ¶ 0034 - 0037, Pg. 10 ¶ 0080, Pg. 10 ¶ 0084 - Pg. 11 ¶ 0086, Pg. 11 ¶ 0088 - 0089, Pg. 
14 ¶ 0120 - 0121 [“Archiving of functional images is described herein to help illustrate the disclosed subject matter… It will be appreciated that the disclosed functional image archiving can be applied to images other than functional images, such as poor quality images (e.g., pocket shots, blurry images, poorly exposed images, etc.), near duplicates (e.g., the lowest quality near duplicate images), and gifted images (e.g., memes, etc., which can include images received via a social network, messaging service, or the like)” and “archiving can include moving the image data of the archived images to be stored in an archive storage space of data storage (e.g., a storage area, a storage device, a class of storage such as long term storage, an archive folder, etc. that is different than the storage used for non-archived images), e.g., to reduce storage space usage on user devices and/or server devices”]) when the at least one received image is determined to be a duplicate image. (Cheng et al., Abstract, Fig. 3, Pg. 1 ¶ 0004 - 0005, Pg. 3 ¶ 0023 and 0030 - 0031, Pg. 4 ¶ 0033, 0035 and 0037, Pg. 6 ¶ 0044 - 0047, Pg. 11 ¶ 0087)
- With regards to claim 23, Cheng et al. in view of Adams et al. disclose the apparatus of claim 21, wherein the apparatus is coupled to a mobile device, cellular phone, tablet, watch or mobile personal computer. (Cheng et al., Figs. 1 & 4, Pg. 2 ¶ 0012, Pg. 3 ¶ 0030 - 0031, Pg. 5 ¶ 0038 - 0040 and 0042 - 0043, Pg. 6 ¶ 0045, Pg. 9 ¶ 0071, Pg. 11 ¶ 0091 - 0095, Pg. 12 ¶ 0097, Pg. 16 ¶ 0135 - 0137)
- With regards to claim 24, Cheng et al. in view of Adams et al. disclose the apparatus of claim 23, wherein the apparatus is coupled to a mobile device, cellular phone, tablet, watch or mobile personal computer comprising an application. (Cheng et al., Figs. 1 & 4, Pg. 2 ¶ 0012, Pg. 3 ¶ 0030 - 0031, Pg. 5 ¶ 0038 - 0040 and 0042 - 0043, Pg. 6 ¶ 0045, Pg. 9 ¶ 0071, Pg. 11 ¶ 0091 - 0095, Pg. 12 ¶ 0097, Pg. 16 ¶ 0135 - 0137)
- With regards to claim 25, Cheng et al. in view of Adams et al. disclose the apparatus of claim 21. Cheng et al. fail to disclose expressly wherein the comparator utilizes metadata to perform the comparisons. Pertaining to analogous art, Adams et al. disclose wherein the comparator utilizes metadata to perform the comparisons. (Adams et al., Abstract, Figs. 2, 3 & 6, Pg. 1 ¶ 0011 - 0015, Pg. 3 ¶ 0030 and 0041, Pg. 4 ¶ 0046 and 0049)
- With regards to claim 26, Cheng et al. in view of Adams et al. disclose the apparatus of claim 21, wherein the meta data comprises at least one of RGB values, pixel values and average pixel values. (Cheng et al., Pg. 1 ¶ 0007 - 0009, Pg. 3 ¶ 0024, Pg. 6 ¶ 0049 - 0051, Pg. 7 ¶ 0058 - 0059 [“other types of image metadata can be used to determine the functional image score, e.g., EXIF data (describing settings or characteristics of a camera capturing the image), timestamp, etc.”]) In addition, analogous art Adams et al. disclose wherein the meta data comprises at least one of RGB values, pixel values and average pixel values. (Adams et al., Fig. 2, Pg. 1 ¶ 0015, Pg. 2 ¶ 0021, Pg. 3 ¶ 0038 - 0041)
- With regards to claim 27, Cheng et al. in view of Adams et al. disclose the apparatus of claim 21, wherein the blocker can be enabled or disabled by a user of the receiving mobile device. (Cheng et al., Figs. 1 - 3, Pg. 3 ¶ 0023 and 0030 - 0031, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0040, Pg. 8 ¶ 0061, Pg. 9 ¶ 0074, Pg. 10 ¶ 0081 and 0085, Pg. 11 ¶ 0087 - 0089 and 0091 - 0093, Pg. 12 ¶ 0097, Pg. 15 ¶ 0124, Pg. 15 ¶ 0129 - Pg. 16 ¶ 0131, Pg. 16 ¶ 0137 [“In one or more implementations described herein, archived images are hidden from display in displayed views of non-archived images (unless user input is received that instructs archived images to be displayed in such a view)”])
- With regards to claim 30, [As Best Understood by the Examiner] Cheng et al. in view of Adams et al. disclose the apparatus of claim 21, wherein blocked images are archived in a designated location. (Cheng et al., Abstract, Fig. 3, Pg. 3 ¶ 0023, Pg. 4 ¶ 0035 and 0037, Pg. 11 ¶ 0086 [“Archiving of functional images is described herein to help illustrate the disclosed subject matter… It will be appreciated that the disclosed functional image archiving can be applied to images other than functional images, such as poor quality images (e.g., pocket shots, blurry images, poorly exposed images, etc.), near duplicates (e.g., the lowest quality near duplicate images), and gifted images (e.g., memes, etc., which can include images received via a social network, messaging service, or the like)” and “archiving can include moving the image data of the archived images to be stored in an archive storage space of data storage (e.g., a storage area, a storage device, a class of storage such as long term storage, an archive folder, etc. that is different than the storage used for non-archived images), e.g., to reduce storage space usage on user devices and/or server devices”])
- With regards to claim 31, [As Best Understood by the Examiner] Cheng et al. disclose a method for managing duplicate images, (Cheng et al., Fig. 3, Pg. 1 ¶ 0004 - 0005, Pg. 3 ¶ 0023 and 0030 - 0031, Pg. 4 ¶ 0033, 0035 and 0037, Pg. 6 ¶ 0044 - 0047) the method comprising the steps of: capturing two or more images from a burst image capture on a sending mobile device or multiple sending mobile devices of a user (Cheng et al., Figs. 1 - 3, Pg. 2 ¶ 0019, Pg. 3 ¶ 0023 and 0030 - 0031, Pg. 4 ¶ 0035, Pg. 4 ¶ 0037 - Pg. 5 ¶ 0040, Pg. 5 ¶ 0042 - 0043, Pg. 6 ¶ 0045 - 0049, Pg. 9 ¶ 0073, Pg. 11 ¶ 0091 - 0093, Pg. 16 ¶ 0137 [“Multiple-source functional image signal prediction can be used, for example, for worse duplicates archiving suggestions (e.g., recommending lower quality duplicate or near-duplicate images for archiving, such as images captured in a burst or near in time at the same location)”]) and sending the two or more images to a receiving mobile device of another user; (Cheng et al., Figs. 1 - 3, Pg. 2 ¶ 0019, Pg. 3 ¶ 0030 - 0031, Pg. 4 ¶ 0035, Pg. 4 ¶ 0037 - Pg. 5 ¶ 0040, Pg. 5 ¶ 0042 - 0043, Pg. 6 ¶ 0045, 0047 and 0049, Pg. 9 ¶ 0071 - 0073, Pg. 11 ¶ 0089 - 0093, Pg. 16 ¶ 0137 [“determine which images among an incoming stream of images are functional in nature and to archive those functional images”, “server system 102 and/or one or more client devices 120-126 can provide a functional image archiving program” and “a camera, cell phone, tablet computer, wearable device, or other client device can capture one or more images and can perform the method 300. In addition, or alternatively, a client device can send one or more captured images to a server over a network, and the server can process the images using method 300”]) comparing two or more images captured in the burst image capture for duplication on the receiving mobile device; (Cheng et al., Figs. 1 - 3, Pg. 1 ¶ 0004 - 0005, Pg. 3 ¶ 0023 and 0030 - 0031, Pg. 4 ¶ 0033, 0035 and 0037, Pg. 
5 ¶ 0038 - 0040 and 0042, Pg. 6 ¶ 0044 - 0047 and 0051, Pg. 9 ¶ 0071 - 0073, Pg. 11 ¶ 0086 and 0091 - 0093, Pg. 16 ¶ 0137 [“a multiple-source prediction (e.g., determining a functional image signal based on multiple images)”]) and utilizing artificial intelligence to learn a user’s behavior (Cheng et al., Pg. 3 ¶ 0024 and 0027 - 0029, Pg. 4 ¶ 0033 - 0037, Pg. 7 ¶ 0052 - 0053, Pg. 11 ¶ 0088 - 0089, Pg. 12 ¶ 0097 - 0098, Pg. 13 ¶ 0109, Pg. 14 ¶ 0120 - 0121) and, according to the user’s behavior of the receiving device with duplicate images, (Cheng et al., Pg. 3 ¶ 0027 - 0029, Pg. 4 ¶ 0033 - 0037, Pg. 7 ¶ 0052 - 0053, Pg. 11 ¶ 0088 - 0089, Pg. 12 ¶ 0097 - 0098, Pg. 13 ¶ 0105 - 0106 and 0109, Pg. 14 ¶ 0120 - 0121, Pg. 15 ¶ 0128) at least one of blocking or archiving the duplicate image based on the behavior of the user of the receiving mobile device from archiving at least one received image when a duplicate is found on the receiving mobile device. (Cheng et al., Abstract, Figs. 1 & 3, Pg. 2 ¶ 0020, Pg. 3 ¶ 0023, 0025 and 0030 - 0031, Pg. 4 ¶ 0035, Pg. 4 ¶ 0037 - Pg. 5 ¶ 0043, Pg. 6 ¶ 0045 and 0047, Pg. 8 ¶ 0066, Pg. 9 ¶ 0071 and 0073, Pg. 10 ¶ 0080, Pg. 10 ¶ 0084 - Pg. 11 ¶ 0086, Pg. 11 ¶ 0091 - 0093, Pg. 
16 ¶ 0137 [“Archiving of functional images is described herein to help illustrate the disclosed subject matter… It will be appreciated that the disclosed functional image archiving can be applied to images other than functional images, such as poor quality images (e.g., pocket shots, blurry images, poorly exposed images, etc.), near duplicates (e.g., the lowest quality near duplicate images), and gifted images (e.g., memes, etc., which can include images received via a social network, messaging service, or the like)”, “as described below in method 300, actions can be performed on images associated with a functional image signal, including, for example, auto-archiving, archiving based on user input, or deleting the images” (emphasis added), “an auto-archiving system can request that images meeting particular archiving criteria be archived without requiring user input or intervention (e.g., without providing suggestions of block 312)” and “archiving can include moving the image data of the archived images to be stored in an archive storage space of data storage (e.g., a storage area, a storage device, a class of storage such as long term storage, an archive folder, etc. that is different than the storage used for non-archived images), e.g., to reduce storage space usage on user devices and/or server devices”]) Cheng et al. fail to disclose explicitly comparing metadata of two or more images; and at least one of blocking or archiving when a duplicate metadata is found, i.e., Cheng et al. fail to disclose explicitly identifying a duplicate image by finding duplicate metadata. Pertaining to analogous art, Adams et al. disclose a method for managing duplicate images, (Adams et al., Abstract, Figs. 3 & 6, Pg. 1 ¶ 0003 and 0007, Pg. 
3 ¶ 0030 and 0040 - 0041) the method comprising the steps of: capturing two or more images from an image capture on a sending mobile device or multiple sending mobile devices of a user and sending the two or more images to a receiving mobile device; (Adams et al., Figs. 1 & 3 - 6, Pg. 1 ¶ 0005 - 0008 and 0015, Pg. 2 ¶ 0026, Pg. 3 ¶ 0030, 0036 - 0039 and 0041 - 0042, Pg. 4 ¶ 0045 - 0047 and 0049) comparing metadata of two or more images captured in the image capture for duplication on the receiving mobile device; (Adams et al., Abstract, Figs. 3 & 6, Pg. 1 ¶ 0012 - 0016, Pg. 2 ¶ 0026, Pg. 3 ¶ 0030, 0033 and 0041 - 0042, Pg. 4 ¶ 0045 - 0047 and 0049) and at least one of blocking or archiving the duplicate image the receiving mobile device from archiving at least one received image when a duplicate metadata is found on the receiving mobile device. (Adams et al., Figs. 3 & 6, Pg. 1 ¶ 0014, Pg. 3 ¶ 0030 and 0041 - 0042, Pg. 4 ¶ 0045 - 0047 and 0049) Cheng et al. and Adams et al. are combinable because they are both directed towards methods and systems of facilitating image storage management as well as managing image duplicates. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Cheng et al. with the teachings of Adams et al. This modification would have been prompted in order to substitute the duplicate image detection process of Adams et al. for the duplicate image identification technique of Cheng et al. The duplicate image detection process of Adams et al. could be substituted in place of the duplicate image identification technique of Cheng et al. utilizing well-known techniques in the art and would likely yield predictable results, in that in the combination the duplicate image detection process based on comparing and identifying duplicate metadata between images of Adams et al. would be utilized to identify duplicate images. 
In addition, this modification would have been prompted in order to enhance the base device of Cheng et al. with the well-known and applicable technique Adams et al. applied to a comparable device. Comparing metadata of the image to the metadata of at least one archived image and identifying a duplicate image by finding duplicate metadata, as taught by Adams et al., would enhance the base device of Cheng et al. by improving its ability to quickly, efficiently and reliably identify duplicate images since fewer computational resources are required to compare metadata of images and because metadata comparisons enable comparisons between image files which are in different formats, as taught and suggested by Adams et al., see at least page 1 paragraph 0015 of Adams et al. Furthermore, this modification would have been prompted by the teachings and suggestions of Cheng et al. that image pixel data and/or image metadata can be analyzed to identify image labels and generate functional image signals and scores and that functional image signal prediction can be used for duplicates archiving suggestions, see at least page 3 paragraphs 0024 - 0025, page 4 paragraphs 0035 - 0037, page 6 paragraphs 0049 - 0050, page 7 paragraph 0058 - page 8 paragraph 0065, page 12 paragraphs 0097 - 0098 and page 13 paragraphs 0105 - 0106 and 0109 - 0110 of Cheng et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the duplicate image detection process based on comparing and identifying duplicate metadata between images of Adams et al. would be utilized to identify duplicate images. Therefore, it would have been obvious to combine Cheng et al. with Adams et al. to obtain the invention as specified in claim 31.
Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Cheng et al. U.S. Publication No. 2019/0197364 A1 in view of Adams et al. U.S. Publication No. 2014/0081926 A1 as applied to claims 1 and 11 above, and further in view of Kakutani U.S. Publication No. 2012/0188581 A1.
- With regards to claim 7, Cheng et al. in view of Adams et al. disclose the method of claim 1. Cheng et al. fail to disclose expressly wherein the method can be enabled or disabled by a user. Pertaining to analogous art, Kakutani discloses wherein the method can be enabled or disabled by a user. (Kakutani, Figs. 2, 3, 8 & 11, Pg. 3 ¶ 0053 - 0054, Pg. 4 ¶ 0065 - 0072, Pg. 4 ¶ 0075 - Pg. 5 ¶ 0078, Pg. 5 ¶ 0084, Pg. 7 ¶ 0116 - 0117) Cheng et al. in view of Adams et al. and Kakutani are combinable because they are all directed towards methods and systems of managing image data. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Cheng et al. in view of Adams et al. with the teachings of Kakutani. This modification would have been prompted in order to enhance the combined base device of Cheng et al. in view of Adams et al. with the well-known and applicable technique Kakutani applied to a comparable device. Providing users with the ability to selectively enable or disable implementing a function, as taught by Kakutani, would enhance the combined base device by giving users more control over the functions of the combined base device as well as by allowing for computational resources to be conserved in situations wherein users do not mind if duplicate images are uploaded. Furthermore, this modification would have been prompted by the teachings and suggestions of Cheng et al. that the blocking of duplicate images can be selectively enabled or disabled by users, see at least figure 3, page 3 paragraph 0023, page 10 paragraphs 0081 and 0085 and page 11 paragraphs 0087 and 0089 of Cheng et al. Moreover, this modification would have been prompted by the teachings and suggestions of Adams et al. 
that their invention may be implemented as a program run by a CPU, see at least page 2 paragraphs 0026 - 0027, page 3 paragraph 0030 and page 5 paragraph 0050 of Adams et al., since it is notoriously well-known that users have the ability to decide whether or not to run, i.e., activate, programs on their computers. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that users would be provided with the ability to selectively enable or disable implementing the process of the combined base device so as to give users more control over the combined base device and enable computational resources to be conserved in situations wherein users do not want the process of the combined base device to be carried out. Therefore, it would have been obvious to combine Cheng et al. in view of Adams et al. with Kakutani to obtain the invention as specified in claim 7.
- With regards to claim 17, Cheng et al. in view of Adams et al. disclose the system of claim 11, wherein the blocking or archiving can be enabled or disabled by the user of the receiving mobile device. (Cheng et al., Figs. 1 - 3, Pg. 3 ¶ 0023 and 0030 - 0031, Pg. 4 ¶ 0035 - Pg. 5 ¶ 0040, Pg. 8 ¶ 0061, Pg. 9 ¶ 0074, Pg. 10 ¶ 0081 and 0085, Pg. 11 ¶ 0087 - 0089 and 0091 - 0093, Pg. 12 ¶ 0097, Pg. 15 ¶ 0124, Pg. 15 ¶ 0129 - Pg. 16 ¶ 0131, Pg. 16 ¶ 0137 [“In one or more implementations described herein, archived images are hidden from display in displayed views of non-archived images (unless user input is received that instructs archived images to be displayed in such a view)”]) Cheng et al. fail to disclose expressly wherein the duplicate module can be enabled or disabled by the user. Pertaining to analogous art, Kakutani discloses wherein the duplicate module can be enabled or disabled by the user. (Kakutani, Figs. 2, 3, 8 & 11, Pg. 3 ¶ 0053 - 0054, Pg. 4 ¶ 0065 - 0072, Pg. 4 ¶ 0075 - Pg. 5 ¶ 0078, Pg. 5 ¶ 0084, Pg. 7 ¶ 0116 - 0117) Cheng et al. in view of Adams et al. and Kakutani are combinable because they are all directed towards methods and systems of managing image data. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Cheng et al. in view of Adams et al. with the teachings of Kakutani. This modification would have been prompted in order to enhance the combined base device of Cheng et al. in view of Adams et al. with the well-known and applicable technique Kakutani applied to a comparable device. 
Providing users with the ability to selectively enable or disable implementing a function, as taught by Kakutani, would enhance the combined base device by giving users more control over the functions of the combined base device as well as by allowing for computational resources to be conserved in situations wherein users do not mind if duplicate images are uploaded. Furthermore, this modification would have been prompted by the teachings and suggestions of Cheng et al. that the blocking of duplicate images can be selectively enabled or disabled by users of receiving mobile devices, see at least figures 1 - 3, page 3 paragraphs 0023 and 0030 - 0031, page 4 paragraphs 0038 - 0039, page 10 paragraphs 0081 and 0085, page 11 paragraphs 0087 and 0089 - 0093 and page 16 paragraph 0137 of Cheng et al. Moreover, this modification would have been prompted by the teachings and suggestions of Adams et al. that their invention may be implemented as a program run by a CPU, see at least page 2 paragraphs 0026 - 0027, page 3 paragraph 0030 and page 5 paragraph 0050 of Adams et al., since it is notoriously well-known that users have the ability to decide whether or not to run, i.e., activate, programs on their computers. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that users of receiving mobile devices would be provided with the ability to selectively enable or disable implementing the process of the combined base device so as to give users more control over the combined base device and enable computational resources to be conserved in situations wherein users do not want the process of the combined base device to be carried out. Therefore, it would have been obvious to combine Cheng et al. in view of Adams et al. with Kakutani to obtain the invention as specified in claim 17.
Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Cheng et al. U.S. Publication No. 2019/0197364 A1 in view of Adams et al. U.S. Publication No. 2014/0081926 A1 in view of Kakutani U.S. Publication No. 2012/0188581 A1 as applied to claims 7 and 17 above, and further in view of Nakamura U.S. Publication No. 2017/0295234 A1.
- With regards to claim 8, Cheng et al. in view of Adams et al. in view of Kakutani disclose the method of claim 7. Cheng et al. fail to disclose explicitly wherein a user’s setting synchronizes on other user’s mobile devices. Pertaining to analogous art, Nakamura discloses wherein a user’s setting synchronizes on other user’s mobile devices. (Nakamura, Figs. 4 - 6, 13A & 13B, Pg. 2 ¶ 0030 - 0032, Pg. 3 ¶ 0041 and 0044, Pg. 4 ¶ 0048 [The Examiner asserts that in the proposed combination of Cheng et al. in view of Adams et al. in view of Kakutani further in view of Nakamura, the process of synchronizing a user’s setting on other user’s devices of Nakamura would be utilized to synchronize settings on the mobile devices of Cheng et al.]) Cheng et al. in view of Adams et al. in view of Kakutani and Nakamura are combinable because they are all directed towards methods and systems of managing image data. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Cheng et al. in view of Adams et al. in view of Kakutani with the teachings of Nakamura. This modification would have been prompted in order to enhance the combined base device of Cheng et al. in view of Adams et al. in view of Kakutani with the well-known technique Nakamura applied to a comparable device. Synchronizing a user’s settings on other user’s devices, as taught by Nakamura, would enhance the combined base device by making it more user friendly and easy to use since user settings would be synchronized across their devices thereby helping to ensure that no matter which device a user is utilizing it is configured to that user’s preferences. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that a user’s settings would be synchronized across their devices so as to ensure that the combined base device is as user friendly as possible. 
Therefore, it would have been obvious to combine Cheng et al. in view of Adams et al. in view of Kakutani with Nakamura to obtain the invention as specified in claim 8.
- With regards to claim 18, Cheng et al. in view of Adams et al. in view of Kakutani disclose the system of claim 17. Cheng et al. fail to disclose explicitly wherein a user’s setting synchronizes on other user’s mobile devices. Pertaining to analogous art, Nakamura discloses wherein a user’s setting synchronizes on other user’s mobile devices. (Nakamura, Figs. 4 - 6, 13A & 13B, Pg. 2 ¶ 0030 - 0032, Pg. 3 ¶ 0041 and 0044, Pg. 4 ¶ 0048 [The Examiner asserts that in the proposed combination of Cheng et al. in view of Adams et al. in view of Kakutani further in view of Nakamura, the process of synchronizing a user’s setting on other user’s devices of Nakamura would be utilized to synchronize settings on the mobile devices of Cheng et al.]) Cheng et al. in view of Adams et al. in view of Kakutani and Nakamura are combinable because they are all directed towards methods and systems of managing image data. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Cheng et al. in view of Adams et al. in view of Kakutani with the teachings of Nakamura. This modification would have been prompted in order to enhance the combined base device of Cheng et al. in view of Adams et al. in view of Kakutani with the well-known technique Nakamura applied to a comparable device. Synchronizing a user’s settings on other user’s devices, as taught by Nakamura, would enhance the combined base device by making it more user friendly and easy to use since user settings would be synchronized across their devices thereby helping to ensure that no matter which device a user is utilizing it is configured to that user’s preferences. 
This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that a user’s settings would be synchronized across their devices so as to ensure that the combined base device is as user friendly as possible. Therefore, it would have been obvious to combine Cheng et al. in view of Adams et al. in view of Kakutani with Nakamura to obtain the invention as specified in claim 18.
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Cheng et al. U.S. Publication No. 2019/0197364 A1 in view of Adams et al. U.S. Publication No. 2014/0081926 A1 in view of Kakutani U.S. Publication No. 2012/0188581 A1 as applied to claims 7 and 17 above, and further in view of Dwan et al. U.S. Publication No. 2014/0270530 A1.
- With regards to claim 9, Cheng et al. in view of Adams et al. in view of Kakutani disclose the method of claim 7. Cheng et al. fail to disclose explicitly wherein the blocking of an image is synchronized on other user’s mobile devices. Pertaining to analogous art, Dwan et al. disclose wherein the blocking of an image is synchronized on other user’s mobile devices. (Dwan et al., Fig. 1, Pg. 2 ¶ 0023 and 0028, Pg. 3 ¶ 0033 - 0036, Pg. 4 ¶ 0041 - 0043, Pg. 5 ¶ 0053 - 0054, Pg. 6 ¶ 0064 - 0066) Cheng et al. in view of Adams et al. in view of Kakutani and Dwan et al. are combinable because they are all directed towards methods and systems of managing image data and, similar to Cheng et al. and Adams et al., Dwan et al. is also directed towards managing duplicate images. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Cheng et al. in view of Adams et al. in view of Kakutani with the teachings of Dwan et al. This modification would have been prompted in order to enhance the combined base device of Cheng et al. in view of Adams et al. in view of Kakutani with the well-known technique Dwan et al. applied to a similar device. Synchronizing the blocking of an image on devices of a user, as taught by Dwan et al., would enhance the combined base device by making it more user friendly and easy to use since a duplicate image would only need to be blocked once on one of a user’s devices to achieve the desired result of having that duplicate image blocked on their devices as well as by reducing redundant operations from being carried out multiple times on multiple user devices. 
This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the blocking of a duplicate image would be synchronized on all devices of a user so as to ensure that the combined base device is as user friendly as possible and help reduce redundant operations from being carried out. Therefore, it would have been obvious to combine Cheng et al. in view of Adams et al. in view of Kakutani with Dwan et al. to obtain the invention as specified in claim 9.
- With regards to claim 19, Cheng et al. in view of Adams et al. in view of Kakutani disclose the system of claim 17. Cheng et al. fail to disclose explicitly wherein the blocking of an image is synchronized on other user’s mobile devices. Pertaining to analogous art, Dwan et al. disclose wherein the blocking of an image is synchronized on other user’s mobile devices. (Dwan et al., Fig. 1, Pg. 2 ¶ 0023 and 0028, Pg. 3 ¶ 0033 - 0036, Pg. 4 ¶ 0041 - 0043, Pg. 5 ¶ 0053 - 0054, Pg. 6 ¶ 0064 - 0066) Cheng et al. in view of Adams et al. in view of Kakutani and Dwan et al. are combinable because they are all directed towards methods and systems of managing image data and, similar to Cheng et al. and Adams et al., Dwan et al. is also directed towards managing duplicate images. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Cheng et al. in view of Adams et al. in view of Kakutani with the teachings of Dwan et al. This modification would have been prompted in order to enhance the combined base device of Cheng et al. in view of Adams et al. in view of Kakutani with the well-known technique Dwan et al. applied to a similar device. Synchronizing the blocking of an image across a user’s devices, as taught by Dwan et al., would enhance the combined base device by making it more user-friendly and easy to use, since a duplicate image would only need to be blocked once on one of a user’s devices to achieve the desired result of having that duplicate image blocked on the user’s other devices as well, and by preventing redundant operations from being carried out multiple times on multiple user devices. 
This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the blocking of a duplicate image would be synchronized on all devices of a user, thereby ensuring that the combined base device is as user-friendly as possible and helping to prevent redundant operations from being carried out. Therefore, it would have been obvious to combine Cheng et al. in view of Adams et al. in view of Kakutani with Dwan et al. to obtain the invention as specified in claim 19.
Claim 28 is rejected under 35 U.S.C. 103 as being unpatentable over Cheng et al. U.S. Publication No. 2019/0197364 A1 in view of Adams et al. U.S. Publication No. 2014/0081926 A1 as applied to claim 27 above, and further in view of Nakamura U.S. Publication No. 2017/0295234 A1.
- With regards to claim 28, Cheng et al. in view of Adams et al. disclose the apparatus of claim 27. Cheng et al. fail to disclose explicitly wherein a user’s setting synchronizes on other user’s devices. Pertaining to analogous art, Nakamura discloses wherein a user’s setting synchronizes on other user’s devices. (Nakamura, Figs. 4 - 6, 13A & 13B, Pg. 2 ¶ 0030 - 0032, Pg. 3 ¶ 0041 and 0044, Pg. 4 ¶ 0048) Cheng et al. in view of Adams et al. and Nakamura are combinable because they are all directed towards methods and systems of managing image data. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Cheng et al. in view of Adams et al. with the teachings of Nakamura. This modification would have been prompted in order to enhance the combined base device of Cheng et al. in view of Adams et al. with the well-known technique Nakamura applied to a comparable device. Synchronizing a user’s settings on the user’s other devices, as taught by Nakamura, would enhance the combined base device by making it more user-friendly and easy to use, since user settings would be synchronized across their devices, thereby helping to ensure that, no matter which device a user is utilizing, it is configured to that user’s preferences. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that a user’s settings would be synchronized across their devices so as to ensure that the combined base device is as user-friendly as possible. Therefore, it would have been obvious to combine Cheng et al. in view of Adams et al. with Nakamura to obtain the invention as specified in claim 28.
Claim 29 is rejected under 35 U.S.C. 103 as being unpatentable over Cheng et al. U.S. Publication No. 2019/0197364 A1 in view of Adams et al. U.S. Publication No. 2014/0081926 A1 as applied to claim 27 above, and further in view of Dwan et al. U.S. Publication No. 2014/0270530 A1.
- With regards to claim 29, Cheng et al. in view of Adams et al. disclose the apparatus of claim 27, wherein the blocker allows for blocking of viewing the at least one received image for the user of the receiving mobile device. (Cheng et al., Abstract, Figs. 1 - 3, Pg. 3 ¶ 0023 and 0030 - 0031, Pg. 4 ¶ 0035, Pg. 4 ¶ 0037 - Pg. 5 ¶ 0040, Pg. 5 ¶ 0042, Pg. 6 ¶ 0045 and 0047, Pg. 8 ¶ 0066, Pg. 9 ¶ 0071 and 0073, Pg. 10 ¶ 0080 - 0081, Pg. 11 ¶ 0086 - 0087 and 0091 - 0093, Pg. 16 ¶ 0137) Cheng et al. fail to disclose explicitly blocking the at least one received image across multiple devices. Pertaining to analogous art, Dwan et al. disclose wherein the blocker allows for blocking of viewing the at least one received image across multiple devices. (Dwan et al., Fig. 1, Pg. 2 ¶ 0023 and 0028, Pg. 3 ¶ 0033 - 0036, Pg. 4 ¶ 0041 - 0043, Pg. 5 ¶ 0053 - 0054, Pg. 6 ¶ 0064 - 0066, Pg. 8 ¶ 0076) Cheng et al. in view of Adams et al. and Dwan et al. are combinable because they are all directed towards methods and systems of managing image data as well as managing duplicate images. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Cheng et al. in view of Adams et al. with the teachings of Dwan et al. This modification would have been prompted in order to enhance the combined base device of Cheng et al. in view of Adams et al. with the well-known technique Dwan et al. applied to a similar device. 
Blocking a received duplicate image from viewing across multiple devices, as taught by Dwan et al., would enhance the combined base device by making it more user-friendly and easy to use, since a duplicate image would only need to be blocked once on one of a user’s or group of users’ devices to achieve the desired result of blocking duplicate images from being viewed by users regardless of what device they are utilizing to browse a collection of images, and by preventing redundant operations from being carried out multiple times on multiple devices. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that duplicate images would be blocked from viewing across multiple devices of the user of the receiving mobile device, so as to ensure that the combined base device is as user-friendly as possible by blocking duplicate images from being viewed regardless of the device a user utilizes to browse their images, and to help prevent redundant operations from being carried out. Therefore, it would have been obvious to combine Cheng et al. in view of Adams et al. with Dwan et al. to obtain the invention as specified in claim 29.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC RUSH whose telephone number is (571) 270-3017. The examiner can normally be reached 9am - 5pm Monday - Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ERIC RUSH/Primary Examiner, Art Unit 2677