DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments/Amendments
Applicant’s arguments with respect to claims 1, 6, and 7 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Objections
Applicant is advised that should claims 3-5 be found allowable, claims 8-10 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-13 are rejected under 35 U.S.C. 103 as being unpatentable over Siddique et al. (US 20240394676 A1, hereinafter “Siddique”) in view of Matute (US 20140359069 A1, hereinafter “Matute”) and Chow (US 20160078058 A1, hereinafter “Chow”).
Regarding claim 6, Siddique teaches:
An information processing apparatus comprising: one or more processors of a computer; a non-transitory computer-readable medium containing executable instructions which when executed by the one or more processors of the computer, causes the computer to perform operation comprising: (Siddique: ¶119, "The memory store 32 is used to store executable programs and other information and may include storage means such as conventional disk drives, hard drives, CD ROMS, or any other non-volatile memory means. . . The CPU 36 is used to execute instructions and commands that are loaded from the memory store 32")
receiving and storing virtual object information generated in a terminal by imaging a subject (¶148, "The data store 70 in an exemplary embodiment comprises a user database 80, an apparel database 82, a 3-D model database 84, and an environment database 86. The user database 80 in an exemplary embodiment is used to record and store information regarding a user of the system 10. Such information includes, but is not limited to a user's access login and password that is associated with the system 10. A user's profile information is also stored in the user database 80 which includes, age, profession, personal information, and user's physical measurements that have been specified by the user, images provided by the user, a user's history, information associated with a user's use of the system"; ¶113, "The three-dimensional models are herein referred to as user models or character models, and are created based on information provided by the user. This information includes, but is not limited to, any combination of: images; movies; measurements; outlines of feet, hands, and other body parts; moulds/imprints including those of feet, hands, ears, and other body parts; scans such as laser scans; skin tone, race, gender, weight, hair type etc.; high resolution scans and images of the eyes; motion capture data"; ¶114,"The online modeling system 10 in an exemplary embodiment comprises one or more users 12 who interact with a respective computing device 14. The computing devices 14 have resident upon them or associated with them a client application 16 that may be used on the model generation process as described below"; ¶118, "Reference is now made to FIG. 2, where a block diagram illustrating the components of a computing device in an exemplary embodiment is shown. 
The computing device 14, in an exemplary embodiment, has associated with it a network interface 30, a memory store 32, a display 34, a central processing unit 36, an input means 38, and one or more peripheral devices 40"; ¶119, "The Peripheral devices 40 may include, but are not limited to, devices such as printers, scanners, and cameras";
Note: The user interacts with a computing device 14 (terminal), uses the cameras to provide images (imaging a subject) and other information as described above (virtual object information) to the user database 80 in the data store 70. Therefore, the user database 80 receives and stores the information (virtual object information) generated from the computing device 14);
generating web page information for reproducing virtual object information on a web page (¶195, "Reference is now made to FIG. 10, where a sample image of a client application window 270 is shown (Note: webpage displaying virtual object information). In an exemplary embodiment, the client application 16 resident, or associated with the computing device causes a client application window 270 to be displayed to the user when the user model is being created. The client application can request and submit data back to the server. The protocol for communication between the application 16 and server 20 is the HTTP protocol in an exemplary embodiment. The application 16, in an exemplary embodiment initiates authenticated post requests to a PHP script that resides on the portal server and that script relays the requested information back to the application 16 from the server 20. People are comfortable with shopping on the internet using a browser and with monetary transactions through a browser. In order to provide the user with a rich experience, a rich 2D and/or 3D environment is desired. Such an environment can be a computational burden on the portal server. To reduce the computational load on the portal server, the computationally intensive rendering aspects have been pushed to the client side as an example. In an exemplary embodiment, this computational efficiency can be achieved through the use of a local stand-alone application or a browser plug-in, or run within a browser, or a local application that interacts with the browser and portal server 20"; ¶198, "To be able to better illustrate the how the user may make modifications to the user model in an exemplary embodiment, reference is made now to FIGS. 12 to 13. Reference is now made to FIG. 12A, where a sample measurement window 290 is shown, in an exemplary embodiment. The measurement window 290 allows the user to specify empirical data that is used to generate or modify the user model. 
The user is able to specify the measurements through aid of a graphical representation that displays to the user the area or region for which a measurement is being requested. In addition. videos and/or audio may be used to assist the user in making measurements. When a user does not specify the measurements that are to be used, default values are used based on data that is computed from the respective images that the user has provided. Measurements associated with a user's waist have been shown here for purposes of example as the user may specify measurements associated with other areas of their body as described above. The user may specify various modifications of the user model that are not limited to body size measurements. Such modifications may include, but are not limited to, apparel size, body size, muscle/fat content, facial hair, hair style, hair colours, curliness of hair, eye shape, eye color, eyebrow shape, eyebrow color, facial textures including wrinkles and skin tone"
Note: Siddique figs. 12-13 illustrate a generated web page displaying a generated model (virtual object) and its anatomical measurements (virtual object information). The web page information is relayed to a browser via a PHP script that resides on the portal server (Siddique: ¶195). When a user accesses the webpage using a URL associated with their respective homepage (Siddique: ¶149), the webpage with the virtual object information is then reproduced in the user's browser);
Siddique further teaches a method where a user can grant access to their profile or home page to other users (Siddique: ¶ 140, ". . .If the user for whom the gift is being purchased already has a user account/profile available in system 10, then their user model may be accessed by the gift-giver upon receiving permission from the user for purposes of testing goodness of fit. If a user wishes to access fit or other information or the user model of a friend, the friend would receive a notification that the specific information has been requested by the user. . ."; Siddique: ¶239, ". . .The user can grant access to this page to other users by setting permissions. . .").
However, Siddique fails to teach the following limitation, which the analogous art Matute teaches.
Matute teaches transmitting to a sharing destination access information to the web page information, upon receiving the terminal’s request for sharing and information about the sharing destination (Matute: ¶3, "In accordance with the invention there is provided a method comprising associating a URL and a resource, the URL for accessing the resource; associating a smartphone with a recipient; providing from a first user to a recipient the URL; receiving a request for access to the resource relying upon the URL, the request received via a communication network; upon receiving the request for access to the resource, transmitting from a server to the smartphone a push notification; receiving a reply based on the push notification transmitted to the smartphone; and in dependence upon the reply, allowing access to the resource via the communications network").
It would have been obvious to a person having ordinary skill in the art (PHOSITA) before the effective filing date of the claimed invention to modify Siddique and implement Matute’s method of transmitting to a sharing destination access information to the web page information, upon receiving the terminal’s request for sharing and information about the sharing destination to allow users to rely on the internet as a communication tool for conversation (Matute: ¶ 2).
Although Siddique teaches that user models may also be programmed to reflect changes over time, such as ageing and weight loss/gain (Siddique: ¶204), the combination of Siddique and Matute still fails to teach wherein the plurality of pieces of virtual object information are arranged juxtaposed based on accompanying information.
The analogous art Chow teaches:
wherein the plurality of pieces of virtual object information (Chow: Fig. 2 ref 103, 105, 106, 108; ¶15, “. . .child frame images. . .”)
are arranged juxtaposed (Chow: ¶15, “. . .juxtaposition. . .”)
based on accompanying information (Chow: ¶12, “The subsequently captured images may be displayed juxtaposed to the base image for convenient comparison of time lapsed changes to the event”; NOTE: in reference to Chow fig. 2, the child images, which are the claimed virtual object information, are accompanied and arranged by date and time information.)
It would have been obvious to a person having ordinary skill in the art (PHOSITA) before the effective filing date of the claimed invention to combine Siddique, Matute, and Chow to display objects in a juxtaposed format. The reason for doing so is to “solve the problems associated with organizing electronic files for comparison of objects within the files by improving the process of organizing images of an event or changing object” (Chow: ¶4).
Regarding CRM claim 1, CRM claim 1 is drawn to the non-transitory computer-readable medium containing the executable instructions recited in the apparatus of claim 6. Therefore, CRM claim 1 corresponds to the executable instructions in the apparatus of claim 6 and is rejected for the same reasons of obviousness as set forth above.
Regarding method claim 7, method claim 7 is drawn to the method corresponding to the executable instructions recited in apparatus claim 6. Therefore, method claim 7 corresponds to the executable instructions in the apparatus of claim 6 and is rejected for the same reasons of obviousness as set forth above.
Regarding claim 2, which depends from claim 1,
The combination of Siddique, Matute, and Chow teaches the non-transitory computer-readable medium according to claim 1,
wherein the accompanying information includes at least one of measurement data, profile information, or generated timing of the virtual object information (Chow: ¶13, “Referring now to FIGS. 1-3 and FIG. 5. . . Child frames 105, 106, and 108 may be tagged with a name related to the event in base image 103, date and time stamp. . .”; NOTE: In reference to fig. 2, the child images are accompanied by a date and time stamp)
Regarding claim 3, which depends from claim 1,
The combination of Siddique, Matute, and Chow teaches the non-transitory computer-readable medium according to claim 1,
the operation further comprising extracting information related to at least one of a size and a weight of the subject, as a change information, from the plurality of pieces of the virtual object information generated by imaging the subject, and outputting the change information (Siddique: ¶152, "As users browse apparel, the system informs the user about how well an apparel fits, if the apparel is available in a given user's size and the specific size in the apparel that best fits the user"; ¶150, "Reference is now made to FIG. 6A, where the steps of a detailed model generation method 110 are shown in an exemplary embodiment. The model generation method 110 outlines the steps involved in generating the 3-D user model. Method 110 begins at step 111, at which the user provides data to the system 10. . . and user specific info such as the age or age group, gender, ethnicity, size, skin tone, weight of the user. User data may be imported from other sources such as social-networking sites or the virtual operating system described later in this document"; ¶198, "To be able to better illustrate the how the user may make modifications to the user model in an exemplary embodiment, reference is made now to FIGS. 12 to 13. Reference is now made to FIG. 12A, where a sample measurement window 290 is shown, in an exemplary embodiment. The measurement window 290 allows the user to specify empirical data that is used to generate or modify the user model. The user is able to specify the measurements through aid of a graphical representation that displays to the user the area or region for which a measurement is being requested. In addition. videos and/or audio may be used to assist the user in making measurements. When a user does not specify the measurements that are to be used, default values are used based on data that is computed from the respective images that the user has provided.
Measurements associated with a user's waist have been shown here for purposes of example as the user may specify measurements associated with other areas of their body as described above. The user may specify various modifications of the user model that are not limited to body size measurements. Such modifications may include, but are not limited to, apparel size, body size, muscle/fat content, facial hair, hair style, hair colours, curliness of hair, eye shape, eye color, eyebrow shape, eyebrow color, facial textures including wrinkles and skin tone"; ¶253, "Reference is now made to FIG. 48F where the ‘user model tools’ space is described. Here the user can access make changes and manage their 3D simulated user model and model profile information (1212)"
Note: System 10 extracts the "size" information in order to inform the user how well an apparel fits. In addition, the size and the weight information are also extracted and used as a change information (when these parameters change, so does the model output) to generate a detailed model as described in Siddique: ¶150. The output information is the generated model. Further, fig. 48F illustrates a webpage which displays (outputs) profile information features including "MY MEASUREMENTS", wherein the measurements include sizes (apparel, hat, collar, shoe, etc.) and weight (¶157)).
It would have been obvious to a person having ordinary skill in the art before the effective filing date that pressing or clicking the "MY MEASUREMENTS" and/or "MODEL PROFILE" link would lead to a webpage that may display (outputting the change information) the measurements, including the size and weight of the subject (model). In addition, fig. 36 illustrates a webpage displaying the "size and weight" (697) of a subject item, thus outputting a size and a weight.
Siddique fig. 36
Siddique fig. 48F
Regarding claim 4, which depends from claim 1,
The combination of Siddique, Matute, and Chow teaches the non-transitory computer-readable medium according to claim 1,
the operation further comprising extracting information related to at least one of a size and a weight, as a change information, from the plurality of pieces of the virtual object information generated by imaging the subject (Note: this limitation is exactly the same as the limitation claimed in claim 3 and is rejected for the same reasons of obviousness as used above)
and outputting a notification when based on the change information, the size matches, exceeds, or falls below a predetermined size or is expected to match, exceed, or fall below the predetermined size (Siddique: ¶134, "System 10 recommends stores to visit based on specific user information such as profession, gender, size, likes/dislikes etc. For instance, for a short female, the system can recommend browsing petite fashion stores. Based on a user's apparel size, the system can point out to the user if a product is available in the user's size as the user is browsing products or selecting products to view. The system may also point out the appropriate size of the user in a different sizing scheme, for example, in the sizing scheme of a different country (US, EUR, UK etc.). In suggesting appropriate sizes to user in products that may vary according to brand, country, and other criteria, the system also takes into account user fit preferences"; ¶152, "As users browse apparel, the system informs the user about how well an apparel fits, if the apparel is available in a given user's size and the specific size in the apparel that best fits the user"
Note: If system 10 recommends a store to visit, then it extracts information including the size for analysis to support its recommendation. If system 10 points out to the user if a product is available in the user's size, then system 10 effectively notifies the user if a product matches the size of the user. Also, in reference to Siddique: ¶152, if the system informs the user about how well an apparel fits, then it compares the apparel to the user's size data and informs (notifies) the user whether it fits well (matches); it may also inform the user if the apparel is too big (exceeds) or too small (falls below)).
Siddique fig. 12B
Regarding claim 5, which depends from claim 1,
The combination of Siddique, Matute, and Chow teaches the non-transitory computer-readable medium according to claim 1, wherein the operation of generating includes generating the web page information (see claim 1 rejection and notes)
for reproducing an image taken on a terminal and the plurality of pieces of virtual object together on a web page (Siddique: fig. 12B is an image of a sample constructed photorealistic model.
Note: Siddique fig. 12 shows a webpage displaying a constructed model of a user. The constructed model with generated anatomical landmarks such as the circumference of the head and neck, width of an eye, etc. as described in Siddique ¶157 (plurality of pieces of virtual object together on a webpage) is generated based on information from the computing device 14 (taken on a terminal) including images captured and provided by the user).
Regarding claim 8, which depends from claim 1,
The combination of Siddique, Matute, and Chow teaches the non-transitory computer-readable medium according to claim 1,
The limitation, the operation further comprising extracting information related to at least one of a size and a weight of the subject, as a change information, from the plurality of pieces of the virtual object information generated by imaging the subject, and outputting the change information is exactly the same limitation as claimed in claim 3. Therefore, claim 8 has exactly the same limitation as claimed in claim 3 and is rejected for the same reasons of obviousness as used above.
Regarding claim 9, which depends from claim 1,
The combination of Siddique, Matute, and Chow teaches the non-transitory computer-readable medium according to claim 2.
The limitation, the operation further comprising extracting information related to at least one of a size and a weight, as a change information, from the plurality of pieces of the virtual object information generated by imaging the subject, and outputting a notification when based on the change information, the size matches, exceeds, or falls below a predetermined size or is expected to match, exceed, or fall below the predetermined size is exactly the same limitation as claimed in claim 4. Therefore, claim 9 has exactly the same limitation as claimed in claim 4 and is rejected for the same reasons of obviousness as used above.
Regarding claims 10-12, which depend from claims 1 and 3-4, respectively,
The combination of Siddique, Matute, and Chow teaches the non-transitory computer-readable medium according to claims 1 and 3-4, respectively.
The limitation of claims 10-12, wherein the operation of generating includes generating the web page information capable of reproducing an image taken on a terminal and the one or the plurality of pieces of virtual object together on a web page, is exactly the same limitation as claimed in claim 5. Therefore, claims 10-12 have exactly the same limitation as claimed in claim 5 and are rejected for the same reasons of obviousness as used above.
Regarding claim 13, which depends from claim 1,
The combination of Siddique, Matute, and Chow teaches the non-transitory computer-readable medium according to claim 1, wherein the plurality of pieces of virtual object information correspond to images of a same person (Chow: ¶13, “. . .a child frame 105 may be created (230) by taking a picture of the same object or event which is stored as an electronic file. . .”) captured at different times (NOTE: Chow Fig. 2 shows images of the same child Tony 103, 105, 106, 108 with accompanying date and time information).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PATRICK GALERA whose telephone number is (571)272-5070. The examiner can normally be reached Mon-Fri 0800-1700 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon can be reached at 571-270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PATRICK P GALERA/
Examiner, Art Unit 2617 /KING Y POON/Supervisory Patent Examiner, Art Unit 2617