DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Office Action has been issued in response to the amendment filed on 12/17/2025. Claims 8-9 and 17-18 are canceled. Claims 1-7, 10-16, and 19-20 are pending. Applicants’ arguments have been carefully and respectfully considered in light of the instant amendment and are not persuasive as they relate to the claim rejections under 35 U.S.C. 103, as discussed below. Accordingly, this action has been made FINAL.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7, 10-16, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Stroila et al. (US 20140136098 A1), in view of Kim (US 20070011042 A1), and further in view of Carter et al. (US 20200059269 A1).
As to claim 1, Stroila teaches a method of selecting digital assets (DAs) for a shared DA library, the method comprising:
initiating, by the end-user device of the first participant of the shared DA library, a camera session (Stroila discloses automatic photo capture for a camera in a mobile device in [0020]);
identifying, by the end-user device, a sharable DA trigger (NOTE: Applicant defines a sharable DA trigger, by way of example, as time or location. Stroila discloses a list of events may be used to trigger the automatic photo capture. The photo capture may be triggered when a frustum or field of view of the camera in the mobile device intersects an expected location and time of an event. See [0020]-[0024]);
capturing, by a camera of the end-user device, a DA during the camera session after identifying the sharable DA trigger (NOTE: Applicant defines a sharable DA trigger, by way of example, as time or location. Stroila discloses a list of events may be used to trigger the automatic photo capture. The photo capture may be triggered when a frustum or field of view of the camera in the mobile device intersects an expected location and time of an event. See [0020]-[0024]); and
selecting, by the end-user device, the captured DA for the shared DA library (Stroila discloses automatically capturing images when a mobile device comes within proximity of an event at a particular time. These images are then collected, stored, and used to update a navigational database. The captured image may be presented to a database administrator for identification of the object of interest in the captured image. The identification of the object may be added to a point of interest database. See [0034]).
Stroila fails to teach a shared DA library; in response to receiving the onboarding request, performing a DA analysis of previously captured DAs in a personal DA library of the first participant, and transferring a DA from the personal DA library to the shared DA library based on the DA analysis; displaying a list of suggested DAs for the shared DA library based on the DA analysis; receiving inputs from the first participant regarding the displayed list of suggested DAs for the shared DA library; and transferring at least some of the suggested DAs from the personal DA library of the first participant to the shared DA library based on the received inputs.
However, Kim teaches a shared DA library (Kim discloses a shared library where users can post images to share and download images to use. See [0075]);
in response to receiving the onboarding request (Kim [0075] discloses a user may choose to upload an image to one of these libraries if the user is expecting to reuse the image in different nodes, or across different semantic networks), performing a DA analysis of previously captured DA in a personal DA library of the first participant, and transferring a DA from the personal DA library to the shared DA library based on the DA analysis (Kim discloses allowing a user to access his or her personal image library and analyze the images he or she would like to share with other users. A shared library is where users can post (i.e. transfer) images to share and download (i.e. transfer) images to use. Other users have the ability to access a shared image library in order to view/download images other users have shared. See [0075]-[0077].); and
displaying a list of suggested DA for the shared DA library based on the DA analysis; receiving inputs from the first participant regarding the displayed list of suggested DA for the shared DA library; and transferring at least some of the suggested DA from the personal DA library of the first participant to the shared DA library based on the received inputs (Kim discloses allowing a user to access his or her personal image libraries and analyze the images he or she would like to share with other users. A shared library is where users can post (i.e. transfer) images to share and download (i.e. transfer) images to use. Other users have the ability to access a shared image library in order to view/download images other users have shared. See [0075]-[0077].).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to modify the teachings of Stroila to incorporate the Methods And Apparatus For Storing, Organizing, Sharing And Rating Multimedia Objects And Documents as taught by Kim for the purpose of providing a method of rating multimedia documents by allowing viewers to provide feedback regarding a semantic network's value or usefulness and then calculating a rating in accordance with the received feedback.
Stroila and Kim fail to teach transmitting, by an end-user device of a first participant, an onboarding request related to the shared DA library.
However, Carter teaches transmitting, by an end-user device of a first participant, an onboarding request related to the shared DA library (Carter discloses the user may tap, at the portable electronic device (i.e. an end-user device of a first participant), the short range communication enabled object in order to perform a predetermined action (i.e. onboarding request) such as saving/sharing a digital asset to another device, which in this case could be another wrist band (i.e. this now becomes a shared DA library). See [0086]-[0095]).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to modify the teachings of Stroila and Kim to incorporate the PORTABLE ELECTRONIC DEVICE FOR FACILITATING A PROXIMITY BASED INTERACTION WITH A SHORT RANGE COMMUNICATION ENABLED OBJECT as taught by Carter for the purpose of increasing the speed and efficiency with which information is shared with other users.
As to claims 2 and 12, Carter teaches that identifying the sharable DA trigger includes detecting a second participant of the shared DA library as being within a target proximity of the end-user device (Carter discloses sharing digital assets with users for social purposes based on a proximity based event in [0011]-[0013], [0076], and [0077]).
As to claims 3 and 13, Carter teaches that identifying the sharable DA trigger includes the first participant selecting a sharing option displayed during the camera session (Carter discloses the user may tap the short range communication enabled object in order to share a digital asset in [0086]-[0095]).
As to claim 4, Carter teaches that the sharable DA trigger includes the first participant selecting a sharing option displayed during a previous camera session (Carter discloses the user may tap the short range communication enabled object in order to share a digital asset in [0086]-[0095]. Choosing a current session or a previous session would be a design choice and would have been obvious to one of ordinary skill in the art.).
As to claim 5, Carter teaches identifying the sharable DA trigger is based on the first participant selecting sharing options before the camera session, the sharing options including a sharing start-time and a sharing end-time (Carter discloses using time delays to transmit digital assets in [0086]-[0095]. It would have been obvious to one of ordinary skill in the art to create a time window, including a start and end time, with which to share digital assets.).
As to claim 6, Stroila teaches identifying the sharable DA trigger is based on a location of the end-user device relative to a significant location (NOTE: Applicant defines a sharable DA trigger, by way of example, as time or location. Stroila discloses a list of events may be used to trigger the automatic photo capture. The photo capture may be triggered when a frustum or field of view of the camera in the mobile device intersects an expected location and time of an event. See [0020]-[0024]).
As to claims 7 and 16, Stroila teaches determining a sharing context metric for the end-user device (Stroila discloses that as the mobile device is moved around a geographic area, a position circuit detects a current location and generates data indicative of the current location. See [0023]); and maintaining a share mode based on the sharing context metric (Stroila discloses automatic capturing of images as long as the user is within proximity to an event coordinate/location. The distance between the user device and the event location is based on an overlap threshold. See [0023]-[0024]).
Stroila and Kim fail to teach a share mode.
However, Carter teaches a share mode (See [0088]).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to modify the teachings of Stroila and Kim to incorporate the PORTABLE ELECTRONIC DEVICE FOR FACILITATING A PROXIMITY BASED INTERACTION WITH A SHORT RANGE COMMUNICATION ENABLED OBJECT as taught by Carter for the purpose of increasing the speed and efficiency with which information is shared with other users.
As to claims 10 and 20, Kim teaches establishing multiple shared DA libraries for the first participant; and selecting between the multiple shared DA libraries based on the identified sharable DA trigger (Kim discloses allowing a user to access his or her personal image libraries and analyze the images he or she would like to share with other users. Other users have the ability to access a shared image library in order to view/download images other users have shared. See [0075]-[0077]. While Kim may not actually transfer images to a separate library, it should be noted that this is a design feature and it would have been obvious to transfer images to multiple separate storage areas accessible to other users instead of allowing access to a user’s personal image libraries.).
As to claim 11, claim 11 is a system claim reciting the following additional limitations to claim 1:
A display device;
One or more processors;
One or more sensors configured to capture digital assets (DAs);
A memory for storing program instructions for the one or more processors, where the instructions, when executed, cause the one or more processors to perform the remaining steps disclosed in claim 1.
Stroila teaches:
A display device (See [0061]);
One or more processors (See [0061]);
One or more sensors configured to capture digital assets (DAs) (Stroila [0052] and [0067] discloses a mobile device comprising a camera which uses LIDAR sensors to capture images.); and
A memory for storing program instructions for the one or more processors, where the instructions, when executed, cause the one or more processors to perform the remaining steps (See [0061]).
The remaining limitations in claim 11 have been analyzed with respect to claim 1 above.
As to claim 14, Carter teaches display the suggested sharing option on the end-user device as part of a camera preview of the camera session before capturing the DA and based on DA analysis of camera preview content (Carter discloses the user may tap the short range communication enabled object in order to share a digital asset in [0086]-[0095]. Choosing a current session or a previous session would be a design choice and would have been obvious to one of ordinary skill in the art.).
As to claim 15, Carter discloses display the suggested sharing option on the end-user device after the camera session has ended and responsive to a deferred DA analysis performed at least a predetermined amount of time after a DA is captured or after the end-user device is placed in a predetermined state or condition, the deferred DA analysis producing a list of suggested DA to transfer from a personal DA library of the first participant to the shared DA library (Carter discloses using time delays to transmit digital assets in [0086]-[0095]. It would have been obvious to one of ordinary skill in the art to create a time window, including a start and end time, with which to share digital assets.).
As to claim 19, Carter teaches produce automatic sharing suggestions or automatic sharing rules for future DAs based on the DA analysis (Carter discloses one or more predefined conditions (i.e. sharing suggestions/rules) may be automatically determined, using the processing device, based on historical data corresponding to performance of the one or more steps. For example, the historical data may be collected, using the storage device, from a plurality of instances of performance of the method. Such historical data may include performance actions (e.g. initiating, maintaining, interrupting, terminating, etc.) of the one or more steps and/or the one or more contextual variables associated therewith. Further, machine learning may be performed on the historical data in order to determine the one or more predefined conditions. See [0044].).
Response to Arguments
Applicants’ arguments with respect to objections and rejections not repeated herein are moot, as the respective objections and rejections have been withdrawn in light of the instant amendments. Those arguments still deemed relevant are addressed below.
A. Applicant Argues:
The cited portions of Carter describe an "ad hoc" temporary sharing mode that can be enabled via different gestures, e.g., when two users with wearable wrist bands on come in close contact with one another. However, as shown in the reproduced portion of amended claim 1, above, the pending claims recite that an "onboarding request" is more than just tapping a UI icon or making a gesture to indicate a desire to share a digital asset with another user during a particular interaction. Instead, the claimed "onboarding request" further requires, inter alia, performing a DA analysis of previously captured DAs in a personal DA library of the first participant and transferring at least one DA from the participant's personal DA library to the shared DA library based on said DA analysis. As explained above, once a shared DA library is established by the particular end-user, all participants of the shared DA library will be able to access the shared DA library via cloud-based storage.
As is further explained, e.g., in Assignee's Specification at [0047], "The onboarding interval refers to an interval that follows an end-user establishing a shared DA library for the first time. During the onboarding interval, smart sharing options may be used to select which DAs in the personal DA library of an end-user will be transferred to the shared DA library. Also, smart sharing policies or preferences for future DAs may be selected by an end-user during the onboarding interval." Thus, the claimed onboarding request process is used to set up and establish a persistent shared DA library in cloud-based storage that is maintained separately from an end-user's personal DA library and updated over time according to configured "smart sharing options."
As may now be appreciated, Kim at [0075]-[0077] does not remedy the deficiencies of Stroila and Carter with respect to the amended claim 1 language. In particular, while Kim may disclose allowing a user to access his or her personal image library and analyze the images he or she would like to share with other users, e.g., in the form of a "shared folder" that other users may access and export images from, this is not the same as or similar to the claimed persistent shared DA library maintained in cloud-based storage that may be viewed and contributed to by any participants to said shared DA library, e.g., according to established smart sharing policies or preferences.
Response:
With respect to Applicant’s argument, the Examiner is not persuaded because Applicant is reading limitations from the specification into the claims. Specifically, the Examiner finds no claim language requiring the “onboarding request” to set up and establish a shared DA library in “cloud-based storage” or to update the shared DA library over time according to “smart sharing options”.
Claims 1 and 11, as amended, merely recite “transmitting, by an end-user device of a first participant, an onboarding request related to the shared DA library; and in response to transmitting the onboarding request, performing a DA analysis of previously captured DAs in a personal DA library of the first participant; transferring a DA from the personal DA library to the shared DA library based on the DA analysis”. However, under the broadest reasonable interpretation (BRI), the “onboarding request” relating to the “shared DA library” is much less restrictive. For example, an onboarding request could be any request sent from a mobile device which would allow a user to analyze his or her digital assets in order to determine which assets to share with someone else. Also, a “shared DA library” could be any form of storage related to sharing assets with additional users, without the need for a virtual cloud.
As shown above, Kim clearly teaches a shared DA library (Kim discloses a shared library where users can post images to share and download images to use. See [0075]); and receiving the onboarding request (Kim [0075] discloses a user may choose to upload an image (i.e. onboarding request) to one of these libraries if the user is expecting to reuse the image in different nodes, or across different semantic networks), performing a DA analysis of previously captured DAs in a personal DA library of the first participant, and transferring a DA from the personal DA library to the shared DA library based on the DA analysis (Kim discloses allowing a user to access his or her personal image library and analyze the images he or she would like to share with other users. A shared library is where users can post (i.e. transfer) images to share and download (i.e. transfer) images to use. Other users have the ability to access a shared image library in order to view/download images other users have shared. See [0075]-[0077].).
Examiner suggests amending the claims to clarify that the “shared DA library” is being hosted on a cloud-based storage and also clarify what the “smart sharing options” are and how they are being utilized to update the shared DA library over time.
B. Applicant Argues:
Moreover, there is also no teaching or suggestion in Carter or Kim as to the remaining portions of amended claim 1 (i.e., from previous claim 9), i.e., related to displaying, by an end-user device of a first participant, a list of "suggested DAs" for the shared DA library based on the DA analysis-and then transferring at least some of the suggested DAs from the personal DA library of the first participant to the shared DA library based on the received inputs.
Response:
With respect to Applicant’s argument, the Examiner is not persuaded because Kim teaches displaying a list of suggested DAs for the shared DA library based on the DA analysis; receiving inputs from the first participant regarding the displayed list of suggested DAs for the shared DA library; and transferring at least some of the suggested DAs from the personal DA library of the first participant to the shared DA library based on the received inputs (Kim discloses allowing a user to access his or her personal image libraries and analyze the images he or she would like to share with other users. A shared library is where users can post (i.e. transfer) images to share and download (i.e. transfer) images to use. Other users have the ability to access a shared image library in order to view/download images other users have shared. See [0075]-[0077].).
Specifically, Kim discloses allowing a user to look through (i.e. analyze) his or her personal photos (i.e. suggested DAs) residing on their mobile device and select images (i.e. receive user input) which he or she would like to upload (i.e. transfer) to a shared library. Once uploaded, the shared library contains images which other users can view, and the users can choose which images from the shared library they would like to download.
The claims lack clarity regarding what or who is generating the “suggested DAs”. It is also unclear from the claims whether the device or the user is performing the analysis. Further, the “received inputs” limitation is extremely broad and could encompass a vast range of actions, including a user selection, uploading, downloading, etc., all of which are taught in Kim.
Examiner suggests further amendments to clarify how the “suggested DAs” are being generated, who or what is performing the analysis, and what the received inputs are. Doing so would help advance prosecution.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JARED M BIBBEE whose telephone number is (571)270-1054. The examiner can normally be reached Monday-Thursday 8AM-6PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, APU MOFIZ, can be reached at 571-272-4080. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JARED M BIBBEE/Primary Examiner, Art Unit 2161