Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114.
Response to Amendment
The Request for Continued Examination, filed on 07/23/2025, has been entered and acknowledged by the Examiner. In the Amendment, applicant amended claims 1, 10, and 16.
As to the Arguments and Remarks filed with the Amendment, please see the Examiner's responses following the rejections under 35 U.S.C. § 103.
Please note that claims 1-5 and 10-28 are pending.
Claims 6-9 are cancelled.
Information Disclosure Statement
The information disclosure statement (IDS) filed on 07/23/2025 has been considered (see form-1449, MPEP 609).
Examiner Notes
Examiner cites particular columns, paragraphs, figures and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 10-12, 15-23, and 26-28 are rejected under 35 U.S.C. 103 as being unpatentable over He et al. (US PGPUB 2018/0060659, hereinafter He), in view of Ido et al. (US PGPUB 2018/0255267, hereinafter Ido) and further in view of Shirwadkar et al. (US PGPUB 2017/0098013, hereinafter Shirwadkar).
As per claim 1, He discloses:
(Currently Amended) A method of creating new metadata from existing metadata, comprising the steps of:
accessing, by a processor, for an individual user, over a network, a first piece of data stored at a first location and a second piece of data stored at a second location that is different from the first location (He, e.g., [0007] and [0033], “access to a set of media content items…” and [0027], “…a first location associated with the first image…A second location associated with the second image…”) (the first source, i.e., the first location, is different from the second source);
retrieving by the processor, from the first and second pieces of data, a first metadata related to the first piece of data, and a second metadata related to the second piece of data (He, e.g., [0024], [0027], [0074] and [0093-0094], “…a first location associated with the first image…A second location associated with the second image…”);
creating, using a processor, a correlated metadata based on the first metadata and the second metadata, wherein the correlated metadata links the first piece of data and the second piece and comprises derived information that reveals a new relationship or detail between the first piece of data and the second piece of data, wherein the derived information is not present in the first metadata and the second metadata (He, e.g., [0027], [0077], “…set of images based on correlation with the visual pattern templates…” and [0106], “…metadata for the first image, a first location associated with the first image. A second location associated with the second image can also be identified, such as via metadata for the second image. The first location can indicate where the first image was captured or taken, and the second location can indicate where the second image was captured…”; further, [0136] discloses linking similar data) and wherein the information is based on an analysis of the first metadata and the second metadata (He, e.g., [0077] and [0105-0106], “…analysis can be performed on content items to determine their potential relevance with a particular subject, topic, and/or theme. The classification analysis can be based on myriad techniques, for example. Content items constituting or including images or text can be analyzed and classified based on any suitable processing technique. For example, an image classification technique can gather contextual cues for a sample set of images and use the contextual cues to generate a training set of images…”) and
storing, by the processor, the correlated metadata in a non-transitory memory used by a search engine to enable the search engine to search the correlated metadata, wherein the non-transient memory is separate from the first location and the second location.
To make the record clearer regarding the features of “correlated metadata based on the first metadata and second metadata,” a secondary reference is cited (although, as stated above, He teaches the features of correlated metadata).
However, Ido, in an analogous art, discloses “correlated metadata based on the first metadata and second metadata” (Ido, e.g., [0013-0015], [0070], “…linking location information to image data” and [0072-0076], “…generates a table which correlates (1) Image file ID, (2) Imaging date and time, and (3) location information (Blank) each other…”). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ido and He to link location information to image data in a storage device (Ido, e.g., [0013]).
To make the record clearer regarding the features of “links the first piece of data and the second piece and comprises derived information that reveals a new relationship or detail between the first piece of data and the second piece of data, wherein the derived information is not present in the first metadata and the second metadata” and “storing, by the processor, the correlated metadata in a non-transitory memory used by a search engine to enable the search engine to search the correlated metadata, wherein the non-transient memory is separate from the first location and the second location,” a further reference is cited.
However, Shirwadkar, in an analogous art, discloses “links the first piece of data and the second piece” (Shirwadkar, e.g., figs. 2 and 12 and associated text description, [0056-0058], [0086-0087], “... cross-linking relevant data from those spaces... cross-link data across information different spaces, or information from different sources in the same space...”) and “comprises derived information that reveals a new relationship or detail between the first piece of data and the second piece of data, wherein the derived information is not present in the first metadata and the second metadata” (Shirwadkar, e.g., [0061-0064], “...specified in the query, or estimated based on the context, trending events, or any knowledge derived from the person-centric space...” and [0088], [0091], “…processes and analyzes the information in the person-centric space to derive analytic results in order to better understand the person-centric space…”) and “storing, by the processor, the correlated metadata in a non-transitory memory used by a search engine to enable the search engine to search the correlated metadata, wherein the non-transient memory is separate from the first location and the second location” (Shirwadkar, e.g., [0090-0091], “…cross-linking is done based on the same cross-linking keys associated with these pieces of person-centric data... data is derived from the person-centric space.... Based on the entities and relationships, person-centric knowledge can be derived and stored in the person-centric knowledge database…” and further see [0105], [0111-0113], “... store the identified individual and corresponding identified relationship in a person-centric knowledge database associated with the user...”).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Shirwadkar, Ido, and He to search different social media applications in the semi-private space and link related information residing in different sources (Shirwadkar, e.g., [0012-0015]).
As per claim 2, the combination of Shirwadkar, Ido, and He discloses:
(Original) The method of claim 1, wherein the first metadata is appended to the first piece of data (He, e.g., [0024], [0027], [0074] and [0093-0094]) and (Ido, e.g., [0013-0015], [0072-0076]); and
wherein the second metadata is appended to the second piece of data (He, e.g., [0024], [0027], [0074] and [0093-0094]) and (Ido, e.g., [0013-0015], [0072-0076]).
As per claim 3, the combination of Shirwadkar, Ido, and He discloses:
(Previously Presented) The method of claim 2, wherein the retrieving step comprises the steps of:
scanning, using a processor the first piece of data to generate a first original metadata (Ido, e.g., [0013-0015], [0072-0076], “…generating image files, each including image data of a captured image and a time stamp of when the image was captured, determining a first time stamp and a second time stamp within a group of the image files…”) and (He, e.g., [0024], [0027], [0074] and [0093-0094]);
scanning, using a processor, the second piece of data to generate a second original metadata (Ido, e.g., [0013-0015], [0072-0076], “…generating image files, each including image data of a captured image and a time stamp of when the image was captured, determining a first time stamp and a second time stamp within a group of the image files…”) and (He, e.g., [0024], [0027], [0074] and [0093-0094]); and
wherein the creating step comprises the step of adding, to the correlated metadata, original information based on the analysis of the first original metadata and the second original metadata (He, e.g., [0027], [0077], “…set of images based on correlation with the visual pattern templates…” and [0106], “…metadata for the first image, a first location associated with the first image. A second location associated with the second image can also be identified, such as via metadata for the second image. The first location can indicate where the first image was captured or taken, and the second location can indicate where the second image was captured…”; further, [0136] discloses linking similar data) and (Ido, e.g., [0013-0015], [0072-0076]).
As per claim 4, the combination of Shirwadkar, Ido, and He discloses:
(Previously Presented) The method of claim 1, wherein the retrieving step comprises the steps of:
scanning, using a processor the first piece of data to generate the first metadata (He, e.g., [0024], [0027], [0074] and [0093-0094]) and (Ido, e.g., [0013-0015], [0072-0076]); and
scanning, using a processor the second data to generate the second metadata (He, e.g., [0024], [0027], [0074] and [0093-0094]) and (Ido, e.g., [0013-0015], [0072-0076]).
As per claim 5, the combination of Shirwadkar, Ido, and He discloses:
(Previously Presented) The method of claim 1, further comprising the step of: storing, in a non-transient memory: the first metadata and the second metadata, either before or after the creating step (He, e.g., [0024], [0027], [0074] and [0093-0094]) and (Ido, e.g., [0013-0015], [0072-0076]) and further see (Barraclough, e.g., fig. 8 and associated text description (store data in a database), [0038], [0046-0049], [0052], [0077], [0083], [0093-0095], “…a search engine to correlate the text subset with a particular article or page of the incoming article…”).
Claim 10 is essentially the same as claim 1 except that it sets forth the claimed invention as a system rather than a method; therefore, it is rejected for the same reasons set forth in the rejection of claim 1.
As per claim 11, the combination of Shirwadkar, Ido, and He discloses:
(Previously Presented) The system of claim 10, wherein the non-transient memory stores the first metadata and the second metadata (He, e.g., [0024], [0027], [0074] and [0093-0094]) and (Ido, e.g., [0013-0015], [0072-0076]) and further see (Shirwadkar, e.g., [0056-0058], “… search more content via the person-centric INDEX system (this function may be similar to conventional search engine) that will lead to the continuously expansion of the person-centric space…”).
As per claim 12, the combination of Shirwadkar, Ido, and He discloses:
(Previously Presented) The system of claim 10, wherein the search engine comprises a search engine processor configured with instructions executable to:
receiving, from the user, an inquiry related to at least one of the first and second pieces of data (He, e.g., [0024], [0027], [0074] and [0093-0094]) and (Ido, e.g., [0013-0015], [0072-0076]);
analyzing the correlated metadata to determine a response to the inquiry; and sending response data to a destination, based on the response (He, e.g., [0077] and [0105-0106], “…analysis can be performed on content items to determine their potential relevance with a particular subject, topic, and/or theme. The classification analysis can be based on myriad techniques, for example. Content items constituting or including images or text can be analyzed and classified based on any suitable processing technique. For example, an image classification technique can gather contextual cues for a sample set of images and use the contextual cues to generate a training set of images…”) and (Ido, e.g., [0013-0015], [0072-0076]), the response data comprising at least one of the first piece of data from the first location and the second piece of data from the second location (He, e.g., [0024], [0027], [0074] and [0093-0094]) and (Ido, e.g., [0013-0015], [0070], [0072-0076]); and
a first link to the first piece of data or a second link to the second piece of data (He, e.g., [0024], [0027], [0074] and [0093-0094]) and (Ido, e.g., [0013-0015], [0070], [0072-0076]) and further see (Shirwadkar, e.g., figs. 2 and 12 and associated text description, [0056-0058], [0086-0087], “... cross-linking relevant data from those spaces... cross-link data across information different spaces, or information from different sources in the same space...”).
As per claim 15, the combination of Shirwadkar, Ido, and He discloses: (Previously Presented) The system of claim 12, wherein the search engine further comprises instructions executable to implement:
formulating based on an analysis of the correlated metadata, an independent inquiry to an independent search engine (He, e.g., [0033], [0077] and [0105-0106], [0115], “…analysis can be performed on content items to determine their potential relevance with a particular subject, topic, and/or theme. The classification analysis can be based on myriad techniques, for example. Content items constituting or including images or text can be analyzed and classified based on any suitable processing technique. For example, an image classification technique can gather contextual cues for a sample set of images and use the contextual cues to generate a training set of images…”) and (Ido, e.g., [0013-0015], [0072-0076]); and
acquiring, from the independent search engine, an independent search result related to the independent inquiry, wherein the response data comprises the independent search results (He, e.g., [0033], [0035], [0115] and [0134] (search results)) and further (Shirwadkar, e.g., [0058], [0061], [0074], “search results”).
As per claim 16, He discloses:
(Previously Presented) A computer program product comprising a non-transitory, computer-readable medium storing thereon a set of computer-executable instructions, the set of computer- executable instructions comprising instructions for:
scanning a plurality of data items to extract metadata from the plurality of data items (He, e.g., [0024], [0027], [0033], [0065], [0074] and [0093-0094], “…identifier, a topic, or a classification for the at least one object. A search through the set of media content items…to identify a subset of media content items that depict the at least one object…”);
analyzing a set of data related to the plurality of data items to generate an indicator of a correlation for the data items not included in the metadata extracted from the plurality of data items, wherein the set of data related to the plurality of data items (He, e.g., [0033], [0077], [0134-0136], “… image classification analysis can be performed on content items to determine their potential relevance with a particular subject, topic, and/or theme…”); and
storing the indicator and the extracted metadata in a memory for use by a search engine, wherein the indicator is linked to the extracted metadata in the memory (He, e.g., [0072], [0093], [0096-0097], [0104], “… store and maintain various types of data. In some implementations, the at least one data store … can store information associated with the social networking system…”).
To make the record clearer regarding the features of “correlated metadata based on the first metadata and second metadata,” a secondary reference is cited (although, as stated above, He teaches the features of correlated metadata).
However, Ido, in an analogous art, discloses “correlated metadata based on the first metadata and second metadata” (Ido, e.g., [0013-0015], [0070], “…linking location information to image data” and [0072-0076], “…generates a table which correlates (1) Image file ID, (2) Imaging date and time, and (3) location information (Blank) each other…”). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ido and He to link location information to image data in a storage device (Ido, e.g., [0013]).
To make the record clearer regarding the features of “the metadata extracted from the plurality of data items, and wherein the indicator comprises derived information that reveals a new relationship or detail between the plurality of data items” and “storing the indicator and the extracted metadata in a memory for use by a search engine, wherein the indicator is linked to the extracted metadata in the memory,” a further reference is cited.
However, Shirwadkar, in an analogous art, discloses “the metadata extracted from the plurality of data items” (Shirwadkar, e.g., figs. 2 and 12 and associated text description, [0056-0058], [0086-0087], “... cross-linking relevant data from those spaces... cross-link data across information different spaces, or information from different sources in the same space...”), and “wherein the indicator comprises derived information that reveals a new relationship or detail between the plurality of data items” (Shirwadkar, e.g., [0061-0064], “...specified in the query, or estimated based on the context, trending events, or any knowledge derived from the person-centric space...” and [0088], [0091], “…processes and analyzes the information in the person-centric space to derive analytic results in order to better understand the person-centric space…”) and “storing the indicator and the extracted metadata in a memory for use by a search engine, wherein the indicator is linked to the extracted metadata in the memory” (Shirwadkar, e.g., [0090-0091], “…cross-linking is done based on the same cross-linking keys associated with these pieces of person-centric data... data is derived from the person-centric space.... Based on the entities and relationships, person-centric knowledge can be derived and stored in the person-centric knowledge database…” and further see [0105], [0111-0113], “... store the identified individual and corresponding identified relationship in a person-centric knowledge database associated with the user...”). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Shirwadkar, Ido, and He to search different social media applications in the semi-private space and link related information residing in different sources (Shirwadkar, e.g., [0012-0015]).
As per claim 17, the combination of Shirwadkar, Ido, and He discloses:
(Previously presented) The computer program product of Claim 16, wherein the set of computer executable instructions further comprises instructions for:
analyzing a first data item from the plurality of data items to extract a detail from the first data item (He, e.g., [0077] and [0105-0106], “…analysis can be performed on content items to determine their potential relevance with a particular subject, topic, and/or theme. The classification analysis can be based on myriad techniques, for example. Content items constituting or including images or text can be analyzed and classified based on any suitable processing technique. For example, an image classification technique can gather contextual cues for a sample set of images and use the contextual cues to generate a training set of images…”) and (Ido, e.g., [0013-0015], [0072-0076]); and
based on the indicator of the correlation, linking the detail extracted from the first data item to the extracted metadata in the memory (Ido, e.g., [0013-0015], [0070], “…linking location information to image data” and [0072-0076], “…generates a table which correlates (1) Image file ID, (2) Imaging date and time, and (3) location information (Blank) each other…”) and further see (He, [0027], [0077] and [0106]) and see (Shirwadkar, e.g., figs. 2 and 12 and associated text description, [0056-0058], [0086-0087], “... cross-linking relevant data from those spaces... cross-link data across information different spaces, or information from different sources in the same space...”).
As per claim 18, the combination of Shirwadkar, Ido, and He discloses:
(Previously presented) The computer program product of Claim 16, wherein the set of computer executable instructions further comprises instructions for:
extracting details from the plurality of data items, wherein the set of data related to the plurality of data items comprises the details extracted from the plurality of data items (He, e.g., [0077] and [0105-0106], “…analysis can be performed on content items to determine their potential relevance with a particular subject, topic, and/or theme. The classification analysis can be based on myriad techniques, for example. Content items constituting or including images or text can be analyzed and classified based on any suitable processing technique. For example, an image classification technique can gather contextual cues for a sample set of images and use the contextual cues to generate a training set of images…”) and (Ido, e.g., [0013-0015], [0072-0076]).
As per claim 19, the combination of Shirwadkar, Ido, and He discloses:
(Previously presented) The computer program product of Claim 18, further comprising:
analyzing a first data item from the plurality of data items to extract an additional detail from the first data item (He, e.g., [0077] and [0105-0106], “…analysis can be performed on content items to determine their potential relevance with a particular subject, topic, and/or theme. The classification analysis can be based on myriad techniques, for example. Content items constituting or including images or text can be analyzed and classified based on any suitable processing technique. For example, an image classification technique can gather contextual cues for a sample set of images and use the contextual cues to generate a training set of images…”) and (Ido, e.g., [0013-0015], [0072-0076]); and
based on the indicator of the correlation, linking the additional detail extracted from the first data item to the extracted metadata in the memory (Ido, e.g., [0013-0015], [0070], “…linking location information to image data” and [0072-0076], “…generates a table which correlates (1) Image file ID, (2) Imaging date and time, and (3) location information (Blank) each other…”) and further see (He, [0027], [0077] and [0106]) and see (Shirwadkar, e.g., figs. 2 and 12 and associated text description, [0056-0058], [0086-0087], “... cross-linking relevant data from those spaces... cross-link data across information different spaces, or information from different sources in the same space...”).
As per claim 20, the combination of Shirwadkar, Ido, and He discloses:
(Previously Presented) The computer program product of Claim 18, wherein analyzing the first data item from the plurality of data items to extract the additional detail comprises performing image recognition to recognize the additional detail in an image (He, e.g., [0052-0053], [0077] and [0105-0106], “…analysis can be performed on content items to determine their potential relevance with a particular subject, topic, and/or theme. The classification analysis can be based on myriad techniques, for example. Content items constituting or including images or text can be analyzed and classified based on any suitable processing technique. For example, an image classification technique can gather contextual cues for a sample set of images and use the contextual cues to generate a training set of images…”) and (Ido, e.g., [0013-0015], [0072-0076]) and see (Shirwadkar, e.g., figs. 2 and 12 and associated text description, [0056-0058], [0086-0087], “... cross-linking relevant data from those spaces... cross-link data across information different spaces, or information from different sources in the same space...”).
As per claim 21, the combination of Shirwadkar, Ido, and He discloses:
(Previously presented) The computer program product of Claim 20, wherein analyzing the first data item from the plurality of data items to extract the additional detail comprises performing a sentiment analysis, wherein the additional detail is a sentiment extracted from the first data item (He, e.g., [0052-0053], [0077] and [0105-0106], “…analysis can be performed on content items to determine their potential relevance with a particular subject, topic, and/or theme. The classification analysis can be based on myriad techniques, for example. Content items constituting or including images or text can be analyzed and classified based on any suitable processing technique. For example, an image classification technique can gather contextual cues for a sample set of images and use the contextual cues to generate a training set of images…”) and (Ido, e.g., [0013-0015], [0072-0076]).
As per claim 22, the combination of Shirwadkar, Ido, and He discloses:
(Previously presented) The computer program product of Claim 16, wherein the indicator of the correlation is usable by the search engine to expand a result of a user inquiry (Ido, e.g., [0013-0015], [0070], “…linking location information to image data” and [0072-0076], “…generates a table which correlates (1) Image file ID, (2) Imaging date and time, and (3) location information (Blank) each other…”) and further see (He, [0027], [0077] and [0106]) and see (Shirwadkar, e.g., figs. 2 and 12 and associated text description, [0056-0058], [0086-0087], “... cross-linking relevant data from those spaces... cross-link data across information different spaces, or information from different sources in the same space...”).
As per claim 23, the combination of Shirwadkar, Ido, and He discloses:
(Previously Presented) The method of Claim 1, further comprising:
receiving, from a user, a search inquiry related to at least one of the first piece of data or the second piece of data (He, e.g., [0024], [0027], [0074] and [0093-0094]) and (Ido, e.g., [0013-0015], [0072-0076]);
processing, by the search engine, the search inquiry, processing the search inquiry comprising: analyzing the correlated metadata to determine a response to the search inquiry (He, e.g., [0077] and [0105-0106], “…analysis can be performed on content items to determine their potential relevance with a particular subject, topic, and/or theme. The classification analysis can be based on myriad techniques, for example. Content items constituting or including images or text can be analyzed and classified based on any suitable processing technique. For example, an image classification technique can gather contextual cues for a sample set of images and use the contextual cues to generate a training set of images…”) and (Ido, e.g., [0013-0015], [0072-0076]); and
based on the response, sending response data to a destination, the response data comprising at least one of the first piece of data, a link to the first piece of data, the second piece of data, or a link to the second piece of data (He, e.g., [0024], [0027], [0074] and [0093-0094]) and (Ido, e.g., [0013-0015], [0070], [0072-0076]) and further see (Shirwadkar, e.g., figs. 2 and 12 and associated text description, [0056-0058], [0086-0087], “... cross-linking relevant data from those spaces... cross-link data across information different spaces, or information from different sources in the same space...”).
As per claim 26, the combination of Shirwadkar, Ido, and He discloses:
(Previously Presented) The method of Claim 23, wherein processing the search inquiry comprises:
formulating based on the analysis of the correlated metadata, an independent inquiry to an independent search engine (He, e.g., [0033], [0077] and [0105-0106], [0115], “…analysis can be performed on content items to determine their potential relevance with a particular subject, topic, and/or theme. The classification analysis can be based on myriad techniques, for example. Content items constituting or including images or text can be analyzed and classified based on any suitable processing technique. For example, an image classification technique can gather contextual cues for a sample set of images and use the contextual cues to generate a training set of images…”) and (Ido, e.g., [0013-0015], [0072-0076]); and
acquiring, from the independent search engine, an independent search result related to the independent inquiry, wherein the response data comprises the independent search result (He, e.g., [0033], [0035], [0115] and [0134]) and (Ido, e.g., figs. 6 and 9 and associated text description) and (Shirwadkar, e.g., [0056-0058], “… search more content via the person-centric INDEX system (this function may be similar to conventional search engine) that will lead to the continuously expansion of the person-centric space…”).
As per claim 27, the combination of Shirwadkar, Ido, and He discloses:
(Previously Presented) The method of Claim 23, wherein the first metadata and the second metadata are different dimensions of metadata and wherein the correlated metadata represents a relationship across the different dimensions of metadata (Shirwadkar, e.g., figs. 2 and 12 and associated text descriptions, [0056-0058], [0086-0087], “... cross-linking relevant data from those spaces... cross-link data across information different spaces, or information from different sources in the same space...”).
As per claim 28, the combination of Shirwadkar, Ido and He discloses:
(Previously Presented) The computer program product of Claim 16, wherein the indicator of the correlation represents a relationship across multiple dimensions of metadata (Ido, e.g., [0013-0015], [0070], “…linking location information to image data” and [0072-0076], “…generates a table which correlates (1) Image file ID, (2) Imaging date and time, and (3) location information (Blank) each other…”); further see (He, [0027], [0077] and [0106]) and (Shirwadkar, e.g., figs. 2 and 12 and associated text descriptions, [0056-0058], [0086-0087], “... cross-linking relevant data from those spaces... cross-link data across information different spaces, or information from different sources in the same space...”).
Claims 13-14 and 24-25 are rejected under 35 U.S.C. 103 as being unpatentable over He et al. (US PGPUB 2018/0060659, hereinafter He), in view of Ido et al. (US PGPUB 2018/0255267, hereinafter Ido) and further in view of Shirwadkar et al. (US PGPUB 2017/0098013, hereinafter Shirwadkar) and further in view of Hamilton et al. (US PGPUB 2017/0332231, hereinafter Hamilton).
As per claim 24, the combination of Shirwadkar, Ido and He discloses:
(Previously Presented) The method of Claim 23, further comprising: encrypting at least one of the first piece of data or the second piece of data as encrypted data, wherein the response data comprises the encrypted data; and appending to a blockchain a record of sending the response data to the destination.
The combination of Shirwadkar, He and Ido discloses a first piece of data or a second piece of data (He, e.g., [0024], [0027], [0074] and [0093-0094]) and (Ido, e.g., [0013-0015], [0070], [0072-0076]), but the combination of Shirwadkar, He and Ido does not explicitly disclose “encrypting data” and “appending, to a blockchain, a record of the send step”.
However, Hamilton, in an analogous art, discloses “encrypting data” (Hamilton, e.g., [0014], [0017-0024], “…generates, using the encryption type, a second encryption value for a second data entry in the second input file; compares the encryption value with the second encryption value; determines whether the encryption value matches the second encryption value; in response to determining the encryption value matches the second encryption value”), and “appending, to a blockchain, a record of the send step” (Hamilton, e.g., [0033], [0069-0073], “…fingerprint followed by the data entry followed by the initialization vector. The combination of the fingerprint, the data record…”). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hamilton, Shirwadkar, He and Ido to provide a solution that compares data from the first data system with data from the second data system without sharing data between the first data system and the second data system (Hamilton, e.g., [0002]).
As per claim 25, the combination of Shirwadkar, Ido and He discloses:
(Previously Presented) The method of Claim 23, wherein the response data comprises at least one of the first piece of data or the second piece of data and wherein the method further comprises: transcoding the at least one of the first piece of data and the second piece of data, at least one of prior to the destination or once delivered to the destination, to permit the destination to access the at least one of the first piece of data or the second piece of data.
The combination of Shirwadkar, He and Ido discloses permitting access to the at least one of the first piece of data and the second piece of data (He, e.g., [0024], [0027], [0074] and [0093-0094]) and (Ido, e.g., [0013-0015], [0070], [0072-0076]), but the combination of Shirwadkar, He and Ido does not explicitly disclose “transcoding the at least one of the first piece of data and the second piece of data, at least one of prior to the destination and once delivered to the destination, to permit the destination to access the at least one of the first piece of data and the second piece of data”.
However, Hamilton, in an analogous art, discloses “transcoding the at least one of the first piece of data and the second piece of data, at least one of prior to the destination and once delivered to the destination, to permit the destination to access the at least one of the first piece of data and the second piece of data” (Hamilton, e.g., [0067], “…The fingerprint may be an alphanumeric code. The fingerprint may be generated based on the data entry or type of data entry. The fingerprint may be used to index a list of data entries. The fingerprint may be a static code. The fingerprint may be appended to the beginning of the data entry (e.g., before the first character of the data entry)…” and [0069-0073], “…the encryption signature may be a session identification code associated with a certain data session. In some embodiments, the encryption signature may be a quantity or value that is not based on the fingerprint and/or the initialization vector…”). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hamilton, Shirwadkar, He and Ido to provide a solution that compares data from the first data system with data from the second data system without sharing data between the first data system and the second data system (Hamilton, e.g., [0002]).
Claims 13-14 are essentially the same as claims 24-25, respectively, except that they set forth the claimed invention as a system rather than a method; they are therefore rejected for the same reasons set forth in the rejections of claims 24-25.
Response to Arguments
The Examiner respectfully reminds applicant of the broadest reasonable interpretation standard (see MPEP 2111): “During examination, the claims must be interpreted as broadly as their terms reasonably allow.” In re American Academy of Science Tech Center, 367 F.3d 1359, 1369, 70 USPQ2d 1827, 1834 (Fed. Cir. 2004) (the USPTO uses a different standard for construing claims than that used by district courts; during examination the USPTO must give claims their broadest reasonable interpretation). In Phillips v. AWH Corp., 415 F.3d 1303, 75 USPQ2d 1321 (Fed. Cir. 2005), the court further elaborated on the “broadest reasonable interpretation” standard and recognized that “[t]he Patent and Trademark Office (‘PTO’) determines the scope of claims in patent applications not solely on the basis of the claim language, but upon giving claims their broadest reasonable construction.” Thus, when interpreting claims, the courts have held that Examiners should (1) interpret claim terms as broadly as their terms reasonably allow and (2) interpret claim phrases as broadly as their construction reasonably allows.
Applicant’s arguments filed 07/23/2025 with respect to claims 1-5 and 10-28 have been considered but are moot in view of the new ground(s) of rejection necessitated by applicant’s amendment to the claims. Applicant’s newly amended features are taught, expressly or implicitly, by the prior art of record (see the new ground(s) of rejection set forth above).
The Examiner respectfully submits that, with respect to the newly amended subject matter, the appropriate paragraphs of the cited references have been applied to reject the claims in response to the amendments; please refer to the corresponding section of this Office action.
Additional Art Considered
The prior art made of record and not relied upon is considered pertinent to the Applicants’ disclosure.
The following patents and papers are cited to further show the state of the art at the time of Applicants’ invention with respect to receiving multiple data points, relating them, and generating a new data point that is a corollary to the multiple points. The claimed invention performs data reticulation using the scanning engine to access a first piece of data stored at a first location and a second piece of data stored at a second location; the scanning engine further retrieves first and second metadata from the first and second pieces of data, respectively.
a. Rajan et al. (US PGPUB 2017/0098283, hereinafter Rajan), “Methods, Systems and Techniques for Blending Online Content from Multiple Disparate Content Sources Including a Personal Content Source or a Semi-Personal Content Source,” discloses providing content from multiple disparate sources, including a person’s personal data sources and non-personal data sources, by receiving a request for content from a person; obtaining first content from a first source private to the person based on the request; obtaining second content from at least one second source based on the request; blending the first content from the first source and the second content from the at least one second source to generate blended content; and providing the blended content to the person in response to the request.
Rajan also teaches linkage of data between the private and semi-private spaces ([0012], [0051-0054]).
Rajan further teaches classifying data ([0075-0080]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. See form 892.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TUAN A PHAM whose telephone number is (571)270-3173. The examiner can normally be reached M-F 7:45 AM - 6:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tony Mahmoudi can be reached on 571-272-4078. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TUAN A PHAM/Primary Examiner, Art Unit 2163