Channel: SCN : All Content - BOPF Application Framework

What are the differences between BOPF and CRM?


Hi,

I would like to know the differences between CRM and BOPF.

What is the difference between the query object (CRM) and queries (BOPF)?


What are the differences between Query and Retrieve in BOPF?


Hi,

I would like to know the differences between QUERY and RETRIEVE. I heard that QUERY hits the database while RETRIEVE hits the buffer.

Creating a Node for EHHSS_INCIDENT_ACTION through EHHSS_INCIDENT


This question is related to the EHSM module, but since it concerns the BOPF framework, I thought of posting it here.

 

I have a reference to the ROOT node of the EHHSS_INCIDENT object and I want to attach a task to it.

I also wanted to attach an involved person, which I managed by using the following method:

 

lo_inv_pers = io_root_node->get_subnode_by_key( if_ehhss_inc_c=>sc_association-root-person_involved ).

lr_s_person_involved ?= lo_inv_pers->create_empty_row( ).

 

 

However, since EHHSS_INCIDENT_ACTION is not a direct sub-node of the EHHSS_INCIDENT object, I tried to use the method

lo_task_node = io_root_node->get_node_by_assoc( if_ehhss_inc_c=>sc_association-root-ehhss_incident_report ).

 

but the next statement

lr_s_task ?= lo_task_node->create_empty_row( ).

 

errors out because the structure of ehhss_incident_report does not get populated.

 

Any suggestions on this ?

Issue in retrieving a dynamically populated dropdown field


Hello ,

 

In my BOPF application I have two dropdown fields, say dropfield1 and dropfield2. I have a check in my FBI view exit class in method /BOBF/IF_FBI_VIEW_EXITINTF_RUN~ADAPT_DATA, and based on that check I default one of the values in dropfield2.

 

This dropfield2, which was populated dynamically in the code, needs to be retrieved in my BOPF action. IO_READ->retrieve successfully retrieves the data table (et_data), but the dynamically populated field is blank in et_data. The dynamic value is one of the dropdown list values.

 

Whereas when I manually open the dropdown for dropfield2 and select a value from the list, IO_READ->retrieve returns the data table (et_data) with all the fields filled, including the manually selected value.

 

Do I need any other configuration for dynamic population in LIST_GUIBB, or am I defaulting the values in ADAPT_DATA incorrectly?

 

While debugging I can see the dynamic default dropdown value in CT_DATA of the view exit as well as in the feeder class.

 

Let me know your thoughts and input on the issue.

 

Regards,

Partha

BOPF model for SAP business partner


Hi all,

 

I started playing with BOPF again. The reason is that we are about to develop a set of custom APIs to create, update and read some common SAP business objects like business partner or contract account. The goal is to have in SAP IS-U some standardized APIs, similar to the BOL/GenIL in SAP CRM, for common business objects. So I thought this sounds like a good use case for BOPF.

 

Does anyone have experience using BOPF to read, create and update business partners? So far I have had a look at /BOBF/CONF_UI, and it seems that the provided object model for business partner is only a stub (e.g. I can't create BPs using the test UI, and some important data like bank data is missing). How difficult would it be to create a more complete BOPF model of the SAP business partner? Is it worth the effort?

 

Best,

Christian

Re-use append structures for field enhancements - possible?


Hello everyone,

 

When enhancing a BO node structure with custom fields, you have to go into the extension include structure and create an append structure. However, it is apparently not possible to re-use a previously defined append structure; you always have to create a new one. Consider a use case in TM where you need identical additional fields in the TOR object as well as, for example, in the TCC object (and maybe others). Wouldn't it be nice to be able to reuse one structure for all the objects?

 

Of course, I could include one and the same structure in all of the append structures. But what are the reasons that I have to go this extra step? Or is there a different way?

How to create dependent object node attributes in BOPF?


Hi,

How do I create the dependent node attributes using the host object? Can anybody tell me the class name to access the dependent node?

Thanks.

BOPF Short Dump error on Custom field


Hi,

I am getting a short dump in EHSM when opening a draft incident: BOPF 'MESSAGE_TYPE_X' on the BASIC_INFO_ALL node.

This error only happens in production. It occurs because of the custom structure fields I can see in the code before the dump: the structure EHHSSS_INC_BASIC_INFO_ALL brings in an extra custom field which no longer exists in any structure in production. I had recently moved a transport to production to remove this custom field as part of other changes. But I am not able to figure out why this field appears before the update when opening a DRAFT incident (not when saving/creating an incident), and why this does not happen in R3Q/D.

 

Do I have to regenerate the BOPF object? Any inputs? I appreciate your help.

 

 

Thanks

Mani

 

Manikandan Nagarajan


Creating Dependent object for Header and Item



Please let me know if we can create a dependent/delegated object, like an attachment folder, for both the header (ROOT) and the item (NODE). I am running into issues while creating it for both. If yes, what is the best approach for this?

Custom BO: insert additional records only into the item node


In a custom BO, we have a header node and an item (status) node (1:n relationship). When inserting data for the first time, I fill both the header and the item node and am able to populate the data correctly.

 

 

Now my requirement is to add only items (status), and while doing so I am getting an error. Please find the code below. Any suggestion/document will help. Thanks.

 

 

CREATE DATA lref_status.

lref_status->key     = /bobf/cl_frw_factory=>get_new_key( ).
lref_status->rdoc    = ls_header-rdoc.
lref_status->st_date = sy-datum.
lref_status->st_time = sy-uzeit.
lref_status->status  = '02'.
lref_status->message = 'Status record updated'.

ls_mod-node        = zif_bopf_ief_c=>sc_node-ief_status.
ls_mod-change_mode = /bobf/if_frw_c=>sc_modify_create.
ls_mod-source_node = zif_bopf_ief_c=>sc_node-ief_header.
ls_mod-source_key  = ls_header-key.
ls_mod-key         = lref_status->key.
ls_mod-data        = lref_status.
APPEND ls_mod TO lt_mod.
CLEAR: ls_mod, lref_status.

TRY.
    CREATE OBJECT lo_driver.

    CALL METHOD lo_driver->mo_svc_mngr->modify
      EXPORTING
        it_modification = lt_mod
      IMPORTING
        eo_change       = lref_change
        eo_message      = lref_message.

    " Check for errors:
    IF lref_message IS BOUND.
      IF lref_message->check( ) EQ abap_true.
        lo_driver->display_messages( lref_message ).
        RETURN.
      ENDIF.
    ENDIF.

    " Apply the transactional changes:
    CALL METHOD lo_driver->mo_txn_mngr->save
      IMPORTING
        eo_message  = lref_message
        ev_rejected = lv_rejected.

    IF lv_rejected EQ abap_true.
      lo_driver->display_messages( lref_message ).
      RETURN.
    ENDIF.

  CATCH /bobf/cx_frw INTO lx_bopf_ex.
    lv_err_msg = lx_bopf_ex->get_text( ).
    WRITE :/ lv_err_msg.
ENDTRY.

 

 

 

 

Error:

 

 

Category               ABAP Programming Error

Runtime Errors         DATREF_NOT_ASSIGNED

ABAP Program           /BOBF/CL_FRW==================CP

Application Component  AP-RC-BOF-RNT

Date and Time          17.07.2015 08:41:26

 

 

  254         READ TABLE lt_new_node

  255           WITH KEY node = <ls_mod>-source_node

  256                    key  = <ls_mod>-source_key

  257           TRANSPORTING NO FIELDS.

>>>>>         IF sy-subrc <> 0                       AND

ABAP to the future – my version of the BOPF chapters – Part 1


It rarely happens that I’m really looking forward to the release of a book. The announcement of “ABAP to the Future” by Paul Hardy was such an occasion. Reading the TOC, I was 100% positive that Paul had picked the right topics needed in order to fulfil the promises of the title.

I was particularly excited about the BOPF chapter: I have given a couple of trainings, been to TechEd with the topic, and promoted it to management as well as fellows, and all the time the same questions are raised: “Why have I not heard about it before?”, “Is there an official training at SAP?” and “Is there a book about it?”. While I still can’t answer the first one, the answer to the second one is “yes, WDEBOF at least gives a rough overview”. I always answered “not yet” to the book question and was excited to finally be able to answer it with “yes, and the title of the book is very appealing to management as well: ABAP to the Future”.

I finally started to read it and have to admit that I don’t agree with a lot of what I read. Sharing this opinion with fellows, I was encouraged to write my comments on SCN so that we can discuss them.

Preface: This is my opinion, based upon my experience and my architectural as well as programming style. @Paul: You have done a great job collecting and preparing those topics. Thanks for including BOPF in it. I hope you don’t mind my criticism and the style I chose. I would love to read your comments on my writing and to discuss each and every aspect. Often, there’s no “right” solution, so I guess that other readers joining the discussion would also benefit from it.

Instead of picking out all the small pieces which I would like to comment on, I chose to write a partially alternative version. This is of course more challenging and exposes my poor writing (I can’t keep up with Paul’s entertainment for sure; I’m German, and Germans tend to stick to facts instead of entertaining the reader, which is a pity...), but I hope that third parties can understand it more easily. Still, I will not repeat the parts which I agree with. You’ll have to read the chapters in Paul’s book yourself and create your own picture which suits your experience and architectural style. Anyway, it’s worth buying the book.

So here we go:

 

8.1 Defining a Business Object

A BOPF model is a representation of a real-world object with a very low representational gap. The model is designed in a dedicated modeling environment in the ABAP backend (in fact, there are multiple designtimes based upon the same model; we’ll come to that later). This so-called “outside-in” approach not only makes it easier to consume the model, but also allows you, as a developer, to discuss it with domain experts using the same vocabulary. But be careful: the business object is agnostic of its consumers (e.g. it doesn’t know anything about the UI). Deriving a model from a UI can render your model inflexible or inappropriate for other consumers. Modeling a business object is very similar to modeling a UML class diagram: you think about how things work and use your brain and methodology in order to derive classifiers (classes) and behavior (methods). If you don’t have a methodology which helps you decide what makes an entity, I highly recommend reading Craig Larman.

 

8.1.1 Creating the Object

Before we head to the system, let’s make a plan. What does a monster look like? A monster is composed of many parts (UML: classes) which are interconnected (UML: by associations). Those which are connected with a dependency such that the parts can’t exist without the monster itself (UML: composition associations) will form our monster business object. An analysis of our use cases has resulted in the monster itself needing a name and a creator. It has a head; it can even have multiple heads. About the legs and arms we might be a bit uncertain: when is an arm an arm, and what distinguishes it from a leg? We might decide on an entity “extremity” instead. All these questions might not occur when deriving the monster from a UI. We might instead have come up with a model where we’ve got only one head (as an attribute of the monster) and six attributes leg1..6. Now let’s move to the BO-Builder. I recommend directly utilizing the BOPF expert tool (transaction BOBX), unless you’ve got EhP7 SP8 and are happy to be able to utilize the BOPF in Eclipse plug-in.

All entities of a model are technically encoded as GUIDs. When programming with BOPF, we technically pass the GUID in order to address the aspect of a business object which we’re talking to. This is necessary, as the command pattern and the service layer pattern are fundamentals of the BOPF architecture (we’ll talk about this in more detail later). Just one word upfront: in the beginning it might feel a bit cumbersome, but you get used to it very easily, as the pattern is ubiquitous. In order to make the code more readable (and writable) for humans, BOPF generates a so-called constant interface which contains a human-readable constant for each GUID. During debugging, however, you are confronted with the GUIDs. Hint: there are various ways to translate a GUID back to the model entity. The easiest one is to press Ctrl+F in the BOPF designtime and paste the GUID; the UI will navigate to the corresponding model entity (at least in the SAP-GUI-based tools). When debugging, you might not want to leave your debugging environment and can utilize the debugger script /BOBF/TOOL_DEBUGGER_SCRIPT.
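As a small illustration (the monster BO and its generated constant interface ZIF_MONSTER_C are hypothetical stand-ins, mirroring the naming used later in this chapter), the constants simply wrap the GUIDs of the model entities:

```abap
" Each constant is a human-readable alias for a model-entity GUID.
" ZIF_MONSTER_C is a made-up generated constant interface.
DATA lv_node_key  TYPE /bobf/obm_node_key.
DATA lv_assoc_key TYPE /bobf/obm_assoc_key.

lv_node_key  = zif_monster_c=>sc_node-root.              " the ROOT node's GUID
lv_assoc_key = zif_monster_c=>sc_association-root-head.  " the ROOT->HEAD association's GUID

" All framework calls address model entities via such keys instead of
" hard-coded 16-byte RAW GUID literals, which keeps the code readable.
```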

 

8.1.2 Creating the root node

The wizard guides you through the creation; you basically have to select proper names. The so-called root node is the topmost entity on whose existence all other nodes depend (as Paul wrote, comparable to the top level of an XML document). I recommend not choosing any name other than ROOT, as the ROOT kind of represents the object itself. Naming it like the object (e.g. MONSTER) is redundant (as node names only have to be unique within the context of the BO), and giving it another name (e.g. HEADER) makes it more difficult to identify as the top-level node. BOPF also creates associations called “TO_ROOT” from all lower-level nodes to the top-level node, so in my opinion it is nice if TO_ROOT actually leads to a node named ROOT.

The structure of each node contains two major includes: one for the persistent structure and one for the transient structure. The persistent structure will be included in the database table which BOPF generates for you. The transient structure contains information which can be derived at runtime. It’s a bit tricky to distinguish between transient attributes and UI control information. I recommend including attributes in the transient structure which are relevant to the business (and not thinking about the UI). For a monster, it is necessary to know how many heads it’s got, as it can bite once with each head (assuming only one mouth per head). Thus, we should have an attribute “number of heads” on our ROOT node. However, this can easily be derived by counting the number of HEAD instances for each monster. It doesn’t need to be persisted, but it could also be stored if it were updated after each head creation or deletion. Whether an attribute is transient or persistent is transparent to the consumer, as both structures are included in the “combined node structure” which is used for consumption. You can even move attributes between the two structures without disruption.

Information which is solely necessary for consumption by a human (via a user interface), like language-dependent texts for a coded value, should never be part of the model, but should be added on an architectural layer closer to the UI (I call this layer, upon which the UI controller is based, “service adaptation”, but this aspect is not part of this chapter). My recommendation: if you can derive an attribute but want to search (the database) for it, or if the derivation is very expensive, include it in the persistent structure. Else, give the transient structure a try.

The names for the runtime structures will be proposed by some funky algorithm which resides in BOPF. I recommend just keeping the prefixes and suffixes as they are (and only changing the semantical part if necessary), as it is very beneficial if the naming is the same within your development team. And there’s always a lazy team member who just goes for the proposal. If you’ve got a deviating naming convention in your company, feel free to extend the BOPF code which does the proposal. Doing this will help you tremendously to learn how BOPF itself works – and might open your eyes wide.

 

8.1.3 Creating the subnodes

Our monster has got heads and extremities. For the sake of simplicity, we’ll just ignore legs and arms for now and simply create an entity to store information about the monster’s various heads. As the heads depend on the existence of the monster itself, this will be a subnode of the ROOT node (in contrast to “FINGER”, which would depend on the existence of the node “EXTREMITY” and would be a subnode of the subnode, if we modeled it). Also note that the node name is singular! How many heads a monster may have is a property of the association between ROOT and HEAD (its cardinality) and might even change as the system evolves (usually from one to many). Also, we don’t prefix the node name with the business object name itself, as the node is context-dependent on the BO anyway.

As I wrote above, a BOPF business object comprises all the entities (as nodes) which are connected by compositions. As we create the subnode, BOPF implicitly also creates a composition association along with it. All associations in BOPF are directional. The composition, which usually has the same name as the target node, leads from ROOT to HEAD. Another association, which leads from HEAD to ROOT, is also created implicitly: it is called “TO_PARENT”. This association exists for all the subnodes of our model, just like “TO_ROOT”, which always associates the ROOT node from the subnode. While TO_PARENT and TO_ROOT always have a cardinality of 1, the compositions’ cardinalities have to be modeled.

Note that – just like in UML – nodes and associations are different entities of the metamodel. In the constant interface, you can also experience this easily: there are separate constants for zif_monster_c=>sc_node-head and zif_monster_c=>sc_association-root-head. Also, there can be multiple associations between the same nodes: e.g. we can imagine one head being a preferred one (which takes on most of the tasks). This dedicated head would still be a head (and not a separate node in the model), but could be represented by a separate association from ROOT to HEAD: “PREFERRED_HEAD”. There are various types of associations, including associations to nodes of other business objects. We’ll only confront our brains with compositions for now; all the other associations can be used in the same way.

In UML, nodes are classifiers (such as classes in ABAP OO) and as such they are also the anchor for behavior: while object-oriented languages differentiate behavior based on visibility (private, protected, public methods), BOPF also differentiates semantically (determinations, actions, validations, queries). What all this is and how it relates to known concepts, we’ll look at in a practical example.

8.2 Using BOPF in order to write a DYNPRO-style program

Even in the ancient DYNPRO times, it was (theoretically) possible to implement an MVC pattern. The DYNPRO events PBO and PAI could serve as entry points for a controller which then delegates the actual logic to a model. However, there are only a few samples I know of where this has been done consistently (you might debug the PBO and PAI of the ABAP Workbench (SE80) or the BOPF builder (BOBX) in order to get an impression of how this could be done).

Please note that, in my opinion, there is no “BOPF equivalent of a PAI”, as the PAI is part of the UI or controller layer (even if I can argue with myself about which one exactly), but it’s surely not meant to be part of the model, which is where BOPF resides. In this chapter, we will consume BOPF (from the UI layer) as well as provide business logic (such as checks on the input’s sanity). The interfaces and patterns for service consumption and provisioning look very similar, which is one of the strengths of the patterns used.

8.2.1 Where is my model class?

Short answer: There is no need for a dedicated model class. Be brave, move on.

The longer version: when I first encountered BOPF and heard that it was an object model, I was desperately looking for a model class which I could instantiate to represent the instance of my real-world object. Without knowing it, I implied a domain model architecture. However, this is not the case with BOPF: BOPF is built to leverage the strengths of ABAP and is meant to be used also for mass data processing. Due to the fact that instantiating an ABAP class is quite an expensive thing, and due to the benefits of the table as a first-class citizen of the language, the BOPF inventors decided that a service layer with command-pattern-based methods would suit best. A service layer means that there is a set of defined core services, offered by a service manager, which have to be provided by all entities. The interface for those services (/BOBF/IF_TRA_SERVICE_MANAGER) is agnostic of the actual entity which is accessed through it. Like most managers, the service layer validates the contract, doesn’t really add a lot of semantical value, and delegates the consumer’s request to the implementation of the entity. The entity which is being addressed is part of the signature of the service façade. You can compare this e.g. to the HTTP methods, where the resource which shall be addressed (URI) is a parameter of the actual method (e.g. GET). For each business object, an instance of the same interface is created in order to manage it (the BO to manage is passed as a parameter during instantiation). At runtime, we will operate with a monster manager. The most familiar services are the so-called CRUD operations. But there are a couple of other services as well, such as evoking behavior (executing an action, in BOPF language). Reading an entity via a service layer looks something like this:
monster = monster_manager->retrieve( node = HEAD key = 4711).

As the signature only allows the framework to know at runtime which entity (node data) shall be transported, all the data transported can’t be statically typed. The same applies, for example, to the parameters of an action:
monster_manager->do_action(
  node       = ROOT
  key        = 4711
  action     = eat
  parameters = new monster_eating_parameters( number_of_crackers = 5 ) ).
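With the real /BOBF/IF_TRA_SERVICE_MANAGER interface, the simplified call above corresponds to something like the following sketch. The monster BO, its constant interface ZIF_MONSTER_C, the EAT action and its parameter structure ZMONSTER_S_EAT_PARAM are made-up illustrations; only the framework classes and types are real.

```abap
" Hedged sketch: ZIF_MONSTER_C and ZMONSTER_S_EAT_PARAM are hypothetical.
DATA(lo_srv_mngr) = /bobf/cl_tra_serv_mgr_factory=>get_service_manager(
                        zif_monster_c=>sc_bo_key ).

DATA lt_monster_keys TYPE /bobf/t_frw_key.   " filled e.g. from a prior QUERY call

" Action parameters travel as an untyped data reference.
DATA lr_param TYPE REF TO zmonster_s_eat_param.
CREATE DATA lr_param.
lr_param->number_of_crackers = 5.

lo_srv_mngr->do_action(
  EXPORTING
    iv_act_key    = zif_monster_c=>sc_action-root-eat
    it_key        = lt_monster_keys
    is_parameters = lr_param
  IMPORTING
    eo_message    = DATA(lo_message)
    et_failed_key = DATA(lt_failed_keys) ).
```

Note how the action to execute (IV_ACT_KEY) is itself a parameter of the generic façade method: this is the command pattern the text describes.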

Even from these simplified examples, you can guess that, given those patterns, method calls occupy a lot of space on your screen. So why not build a wrapper (helper class) around it in order to shorten the code? From an architectural perspective, there is no semantical value added by a wrapping model class; it only simplifies code. From a development perspective, you’ll end up with a simplification which – in the beginning – may cover 80% of the usages, but over time you’ll end up extending and extending the signatures of your helper class, adding a lot of methods, and in the end you will have an even more complex (but manually maintained) artifact which doesn’t really make things easier anymore. And there will always be developers who just consume the service layer directly, as it feels more appropriate in that moment. A generic simplification is also likely to ignore performance tunings which you could have had if you hadn’t defaulted a parameter in your simplification. From a QM perspective, you might also give up a major benefit of the pattern (if you create dedicated typed model classes for each entity): the way you talk to business objects is the same across all business objects. Whether you want to read a monster’s leg data or the engine information of a rocket: you don’t have to learn new signatures if you know how to talk to the manager. I cannot go into detail about all the negative aspects, and anyway, an encapsulation might really make your code shorter in the end. I can’t stop you from doing it your own style anyway, but I can tell you that after my seven years of experience coding real applications with BOPF, I would not go for any type of wrapper anymore.

Finally, if you are still keen to have an encapsulation, then at least generate it as a typed access class. You don’t even need to write the generator yourself: the BOPF team had this idea as well, but it didn’t make it into the official features (due to the above reasons, I guess). You can still find it in the internal full-blown modeling environment in the extras menu, though.

ABAP to the future – my version of the BOPF chapters – Part 2


8.2.2 Accessing instances of a Business Object

As we imagine entering our application, there are typically two different UI patterns: after a selection screen, a list of instances matching the criteria is displayed, one row representing one instance of the entity. On the button bar, we’d be offered the options to edit an existing instance or to create a new one. Alternatively, we could also just have to enter an identifier which is a human-readable semantical key. The next screen would then be used to read the current data and edit it, or to create an instance with the corresponding ID (optionally with default values).

Identifying instances

As written before, a business object node is the model part which corresponds to a UML class and thus carries the data of the actual instances. In BOPF, each of these instances is identified by a – tada – GUID. This technical key does not need to be modeled: while generating the combined structure, which includes the persistent as well as the (optional) transient information, BOPF also includes a technical structure, the so-called key include. It contains not only the instance’s GUID (KEY), but also the PARENT_KEY, which is the KEY of the parent node instance (initial for root nodes), as well as the ROOT_KEY (in the case of a root node instance, ROOT_KEY and KEY carry the same value). This key include is used by the framework to resolve compositions as well as their reverses (TO_PARENT) and TO_ROOT, but of course it can also be interpreted in business logic.

All semantical data, including identifiers, is modeled as attributes. Based on these attributes, two core services exist in order to get the KEYs of an instance: QUERY and CONVERT_ALTERNATIVE_KEY. Once you know the key, you can feed it to the core service RETRIEVE in order to get the actual data.

Let’s have a look at QUERY first, as it’s the simpler one. A query is a modeled artifact which resides at a node (the “assigned node”). Based upon (multiple) query parameters, a set of instances of the node at which the query resides is returned (precisely: the corresponding KEYs). The query contract allows applying the well-known select options to each attribute (including BT and CP). There are two types of queries which do not need to be implemented, but can be answered by the framework itself: the node-attribute query SELECT_BY_ELEMENTS, whose parameters match the persistent node structure, and the SELECT_ALL query, which has no parameters. Note that query names are not unique within the model, but only within the context of the node: a SELECT_BY_ELEMENTS at the ROOT node will return keys of ROOT instances, while the SELECT_BY_ELEMENTS at the HEAD node will return HEAD keys matching the criteria (potentially of multiple monsters). All queries adhere to the implied contract and have to support paging as well as the restriction to a set of instances upon which to query (see parameter “is_query_options”). I believe “QUERY” feels very familiar to most ABAP developers, as it kind of wraps an SQL query (like a prepared statement). But there is one pitfall when using it in transactional applications: just like any SELECT statement, only persisted data can be returned. The transactional buffer (some internal member table which holds the created and changed instances) is ignored. Therefore, I highly recommend using “QUERY” only from the consumer at the very beginning of a transaction (e.g. on a selection screen or at the beginning of some batch report). Especially within service provisioning, queries must not be used! The side effects of reading dirty while applying business logic are tricky to identify and mostly horrible to correct.
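A SELECT_BY_ELEMENTS call might look like the following sketch. The monster constant interface ZIF_MONSTER_C and its CREATOR attribute are hypothetical; the selection-parameter table type and the QUERY signature are the framework’s own.

```abap
" Sketch: find the keys of all monsters created by 'FRANKENSTEIN'.
" ZIF_MONSTER_C and the CREATOR attribute are made-up illustrations.
DATA lt_parameters TYPE /bobf/t_frw_query_selparam.

lt_parameters = VALUE #(
  ( attribute_name = zif_monster_c=>sc_node_attribute-root-creator
    sign           = 'I'
    option         = 'EQ'
    low            = 'FRANKENSTEIN' ) ).

lo_srv_mngr->query(
  EXPORTING
    iv_query_key            = zif_monster_c=>sc_query-root-select_by_elements
    it_selection_parameters = lt_parameters
  IMPORTING
    et_key                  = DATA(lt_monster_keys) ).  " only persisted instances!
```

The select-option-like rows (SIGN/OPTION/LOW/HIGH) are what makes the query feel like a prepared SQL statement.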

The core service CONVERT_ALTERNATIVE_KEY is much less comfortable with respect to how instances of a node are identified and needs more modeling, but it respects the transactional buffer! An alternative key in the sense of BOPF is an attribute of a node (or a combination of multiple attributes) which serves to identify either exactly one instance (usually an ID) or a set of instances (usually a foreign key). A node may have one or more alternative keys. The definition in the business object comprises its structure as well as its multiplicity (uniqueness). In our sample, the monster number could be a unique alternative key, while the creator could be a non-unique alternative key if there were a need for business logic based on selection by creator. If, for example, monsters have a rental price and the price shall be adjusted for all monsters of a creator, we’d need an alternative key on the creator: using a query would not find a monster which was created within the same transaction.
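Converting an alternative key into technical keys might look roughly like the following sketch. Treat it as an assumption: the monster BO, the MONSTER_NUMBER alternative key and the data element ZMONSTER_NUMBER are hypothetical, and the exact parameter names of CONVERT_ALTERN_KEY should be verified against /BOBF/IF_TRA_SERVICE_MANAGER in your release.

```abap
" Assumption: MONSTER_NUMBER was modeled as a unique alternative key.
DATA lt_altkey TYPE /bobf/t_frw_altkey.   " rows carry data references to key values
DATA lr_number TYPE REF TO zmonster_number.

CREATE DATA lr_number.
lr_number->* = '4711'.
APPEND VALUE #( key = lr_number ) TO lt_altkey.

lo_srv_mngr->convert_altern_key(
  EXPORTING
    iv_node_key   = zif_monster_c=>sc_node-root
    iv_altkey_key = zif_monster_c=>sc_alternative_key-root-monster_number
    it_key        = lt_altkey
  IMPORTING
    et_key        = DATA(lt_keys) ).   " technical KEYs, buffer respected
```

In contrast to QUERY, this lookup also finds instances that were created in the current transaction but not yet saved.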

The alternative key’s uniqueness can also be used for validating that no second instance with the same unique alternative key is getting created. In contrast to what Paul wrote, BOPF offers a re-use-feature which ensures the adequate uniqueness: Once you model an alternative key, you are requested to add an action validation (which we’ll cover in a later chapter) with implementation class /BOBF/CL_LIB_V_ALTERNATIVE_KEY. The SAP-provided implementation also ensures uniqueness across multiple sessions on non-persisted data!

Remark: Alternative keys are also necessary in order to be able to model associations between nodes of different business objects (Cross-BO-associations). In this case, the multiplicity of the association has to match the uniqueness of the alternative key.

Reading data

Alright, now we’ve got a set of technical keys of instances which we’d like to process. There are two core services for reading data: RETRIEVE gets the data of instances of which we know the KEYs. RETRIEVE_BY_ASSOCIATION – surprise, surprise – can retrieve instances (KEYs) of associated nodes. Optionally (not by default!), RETRIEVE_BY_ASSOCIATION also returns the data of the target instances. Both services allow the consumer to specify which attributes of the retrieved node they are interested in via it_requested_attributes. If one of the requested attributes is a calculated one (from the transient part of the node structure), BOPF will execute the corresponding calculation. If no requested attributes are specified, all node attributes are considered requested. As your models grow (and they will, be sure), and transient information is added and calculated, the use of the requested attributes becomes more and more important. So even if you’re requesting all attributes of the currently modeled nodes, I recommend specifying the attributes which are relevant. This not only saves you nasty performance analysis in the future, but also helps to make your code more readable. Let me give you a short sample:

monster_manager->retrieve(
  EXPORTING
    iv_node_key             = zif_monster_c=>sc_node-root
    it_key                  = relevant_monster_keys
    it_requested_attributes = VALUE #( ( zif_monster_c=>sc_node_attribute-root-number_of_heads ) )
  IMPORTING
    et_data                 = relevant_monsters ).

The above code implies that the number of heads is relevant for the business logic which is about to follow. Also note that a table of monster keys is being fed into the method. In BOPF, all commands issued by the consumer are mass-enabled. This is particularly important for the retrieval methods, as each read might result in a DB access (if the buffer is not hit for all instances). It can cripple your system’s performance if you only feed single keys, read with index 1 and do this in a loop. I highly recommend mass-reading all the relevant data (including the necessary associated data) right at the beginning of the method. If in addition you properly fill the requested attributes, 80% of your performance tuning has already been taken care of.

The command for following an association looks very similar:

monster_manager->retrieve_by_association(
  EXPORTING
    iv_node                 = zif_monster_c=>sc_node-root
    it_key                  = relevant_monster_keys
    iv_association          = zif_monster_c=>sc_association-root-head
*   iv_fill_data            = abap_true
*   it_requested_attributes = VALUE #( ( zif_monster_c=>sc_node_attribute-head-number_of_eyes ) )
  IMPORTING
    et_key_link             = link_root_head
*   et_data                 = relevant_monsters_heads
).

A careful observer will notice that the data of the target node is not always returned when following an association. The runtime representation of an association is a link between the source and the target node. The data is actually a property of the target node and not always necessary in order to implement the requested behavior. As the retrieval of the target node’s data is comparatively expensive (particularly if transient information is requested), the default is not to request the data (iv_fill_data). If you have managed to implement a real-world use case without ever running into a short dump because you forgot to set iv_fill_data = abap_true, you are certainly a more careful programmer than I am.

Modifying instances

After we read the current data of an instance, we might want to manipulate it. /BOBF/IF_TRA_SERVICE_MANAGER offers the core service MODIFY, a command to execute all kinds of manipulations (create, update, delete). The modify command gets passed a set of modification instructions which might not only affect multiple instances, but also multiple nodes in one call. This is essential, as there might be business logic which validates whether an instance can be created based on subnode data. E. g. we could validate that each monster needs to have at least one head. Creating a monster without a head would reject the modifications for the failed monster instance.

I will not go into the details of the command (but I recommend you to read the method documentation on the interface which will really help you, the BOPF documentation team did a great job there), but I’ll point you to some specialties.

When creating instances of multiple nodes of a composition, you need to make sure that the instances of the subnode are created for the proper parent-node-instance. In order to be able to do this, you need to know the KEY of the parent node instance. In this case, you can use /bobf/cl_frw_factory=>get_new_key( ) in order to define with which technical identifier the parent node instance shall be created. Else, as a consumer you don’t need to define the key, the framework will do that for you.
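A sketch of such a multi-node create (all zmonster_* names are assumptions; the field names of /BOBF/S_FRW_MODIFICATION are the ones I remember, so verify them in your system):

```abap
" Sketch: creating a ROOT instance together with one HEAD instance in a
" single MODIFY call. The parent key is pre-determined via
" /bobf/cl_frw_factory=>get_new_key( ) so that the HEAD can reference it.
DATA lt_mod TYPE /bobf/t_frw_modification.
DATA(lv_root_key) = /bobf/cl_frw_factory=>get_new_key( ).

APPEND VALUE #( node        = zif_monster_c=>sc_node-root
                change_mode = /bobf/if_frw_c=>sc_modify_create
                key         = lv_root_key
                data        = NEW zmonster_s_root( name = 'Sulley' ) ) TO lt_mod.

APPEND VALUE #( node        = zif_monster_c=>sc_node-head
                change_mode = /bobf/if_frw_c=>sc_modify_create
                source_node = zif_monster_c=>sc_node-root
                association = zif_monster_c=>sc_association-root-head
                source_key  = lv_root_key
                data        = NEW zmonster_s_head( hat_size = 7 ) ) TO lt_mod.

monster_manager->modify( it_modification = lt_mod ).
```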

Once you update an instance, you can use the changed attributes to inform the framework which parts of the instance have changed. This not only increases performance (as BOPF doesn’t have to compare the before- and target-data), but also allows you to have multiple modification instructions per instance affecting different attributes.

When deleting an instance, BOPF will implicitly delete the subnodes (via the compositions) as well. There is no need for an explicit deletion of the subnode-instances.

Change- and message-object

Each core-service returns a message container and a change object. It is crucial to understand that in a BOPF-application (such as it should be in any other well-designed application), messages are exclusively intended to be interpreted by a human. Business logic must never be based upon the existence of a particular message-attribute. For this, BOPF calculates a change-object after each roundtrip. This does not only inform about the difference in the transaction before and after the roundtrip, but also informs about failed changes. It may also be the case that during one roundtrip, multiple modifications are being made out of which some are successful and some fail (because they violated some constraint). Thus, if the has_failed_changes( )-method returns abap_true, you definitely have to analyze which change failed!
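A minimal sketch of evaluating the change object after a roundtrip (lo_change stands for the /BOBF/IF_TRA_CHANGE instance returned along with the core service result):

```abap
" Sketch: never branch on messages - inspect the change object instead.
IF lo_change->has_failed_changes( ) = abap_true.
  lo_change->get_changes( IMPORTING et_change = DATA(lt_change) ).
  LOOP AT lt_change ASSIGNING FIELD-SYMBOL(<ls_change>).
    " analyze which instance (and which node) failed and react accordingly
  ENDLOOP.
ENDIF.
```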

<-- Back to the first post about general modeling and the unnecessary model class

--> Next: Implicit services for locking and authorization management

ABAP to the future – my version of the BOPF chapters – Part 3


8.2.3 Locking

The manipulating services we've been talking about in the previous chapter of course require the business object which we're accessing to be locked – precisely, it is the BO node instance which has to be exclusively available to the requesting session at that moment. Enqueueing and dequeueing as well as interpreting the lock result is done implicitly by the framework at runtime. You as a developer don't really need to worry about it, but it is good to know how it works in general.

Each node is – theoretically – separately lockable. This means that multiple sessions can manipulate different subnode instances of the same root instance. You can imagine a monster factory where one department attaches the heads and another one is responsible for the extremity-design, while the management defines which monsters (root node instances) exist. If this was your business, you could simply model the head and the extremity node as separately lockable and – if the UI permits a local edit – multiple persons could edit heads and extremities of the same monster. All the subnodes which are not separately lockable are logically locked along with the next-level lockable parent. Assuming that FINGER is a subnode of EXTREMITY, but FINGER is not separately lockable, a consumer adding a finger to an extremity would require a lock on the extremity-instance.

By default, only the root-node is separately lockable.

8.2.4 Authorizations

Another technical thing which happens implicitly when interacting with a BO is authorization handling. This latest BOPF feature allows you to model which authority object guards the interactions with a BO node. It is similar to locking with respect to the hierarchical interpretation of the modeled auth-objects: if a parent node has an authority object attached but the subnode does not, the authorizations for accessing the parent node are validated.

As soon as you have defined that a BO shall have authority checks, you may define one or multiple auth-objects at a BO node. The auth-object effectively defines a set of nodes with the same target user group. If multiple objects are modeled, all authority checks need to pass in order to grant the desired access. The authority field ACTVT has to be part of the auth-object definition, and BOPF will determine the necessary value at runtime. In addition, BOPF will pass the action name when executing an action (and the query name when executing a query) to the field BO_SERVICE in order to allow fine-grained roles (obviously, your security guy has much more to do maintaining the roles than the developer “implementing” the auth-validations). ACTVT and BO_SERVICE are compulsory for each auth-object definition which is used within BOPF.
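Conceptually, what BOPF evaluates at runtime corresponds to an ordinary authority check; a rough sketch (the auth-object Z_MONSTER and the action name are hypothetical):

```abap
" Illustrative only - BOPF performs this check internally; you don't code it.
AUTHORITY-CHECK OBJECT 'Z_MONSTER'
  ID 'ACTVT'      FIELD '16'              " activity derived by BOPF from the requested service
  ID 'BO_SERVICE' FIELD 'SCARE_VILLAGE'.  " action name (or query name) passed by BOPF
IF sy-subrc <> 0.
  " BOPF would deny the requested interaction
ENDIF.
```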

BOPF supports two different types of authority checks out of the box: static and dynamic checks. With dynamic checks, additional authority fields exist which are checked based on the value of an attribute of a node instance. For example, we could have the creator of a monster as an attribute relevant for authorizations. In our factory, multiple departments exist, each handling the monsters of a dedicated creator. Thus, at runtime, the value of creator needs to be read, and the authority check has to compare the instance’s value with the allowed value from the role definition. BOPF does all this data retrieval for you after you model the auth-attribute-mapping. It even supports the relevant attribute being located at an associated node: assuming that heads were separately lockable as we said earlier, we could model that heads are validated with the same auth-object as the root, with an authority-field-mapping based on the CREATOR which can be retrieved via the association TO_PARENT.

Funky, isn't it? And the best part of it: you don't need to write a single line of code in order to make this work, independent of which consumer you've got. And if your authorization logic is even more “sophisticated”, you are always free to define your own authorization class which implements your very custom authorization concept.


<-- Back to the post covering the basic consumption of a BO using CRUD services

Search help in Determination to NWBC field screen.


Hi Experts.

 

I created a field in an NWBC screen and I want to put a search help on this field.

Could I call a search help function inside my determination for this field?

 

For example, when I want to block a field I use:

  lo_property->set_attribute_read_only(

              EXPORTING

               iv_attribute_name = 'ZPACKTYPE'

               iv_key            = ls_item-key

               iv_value          = abap_true ).


And it works.


But when I want to input a search help with data selected in my validation, I don't know how to do it.


Do you know how to do it? Or do you know a page in the SAP TM guide that teaches it?



Thanks

ABAP to the future – my version of the BOPF chapters – Part 4: Determinations and general architectural aspects


8.2.5 – Determinations

Up to now, we’ve been interacting with a Business Object solely as a consumer. It’s now time to move to the other side and provide some business logic.

Determinations are implicitly executed data manipulations which the framework triggers upon a modeled interaction with a business object. They can’t be explicitly invoked by a consumer and are thus comparable to non-public methods in a UML class model which are executed as a kind of side-effect. This side-effect should be relevant to the business logic (and not solely be derived in order to be presented on a UI). Whether the result is persisted or only transiently available until the end of the transaction does not matter to how the business logic is implemented.

The most important and sometimes tricky decision you need to make is which node the determination shall be assigned to. The answer is simple once you remember that a BO node corresponds to a UML class: at the node which represents the entity at which you would implement the private method in a domain model. Well, this might not have helped you much if you’ve been more focused on coding than on modeling so far, but the next hint should help more: at runtime, the instances of the assigned node are passed into the determination. So usually it makes sense to assign the determination to the node where the attributes reside which are going to be manipulated by the determination. Or, more generally: choose as the assigned node the topmost node from which all information required within the determination is reachable. A sample further down should illustrate this aspect.

Three aspects are relevant to determinations: Which interaction makes the system trigger what business logic and when is this logic being executed. While the “what” is being coded in ABAP as a class implementing a determination interface, trigger and execution time can be modeled.

Triggers can be any of the CRUD services requested at a BO node. A trigger for a determination can also be a CRUD operation on a node which is associated to the assigned node (e. g. a subnode). In order to understand the options for the execution time, it is essential to understand the phase model of a BOPF transaction – a chapter on its own. Anyway, only a subset of the combinations of triggers and execution times makes sense, and SAP has thus enhanced the determination creation wizard (compared to the previous releases and the full-blown BOBF modeling environment): the wizard offers a selection of use cases for a determination:

Derive dependent data immediately after modification

Trigger: Create, update, delete of a node; execution time: “after modify”. Immediately after the interaction (during the roundtrip), the determination shall run. This is by far the most often required behavior. Even if no consumer might currently request this attribute (e. g. as it’s not shown on the UI), most calculated attributes shall be derived immediately, as other transactional behavior might depend on the calculation’s result.

Derive dependent data before saving

Trigger: Create, update, delete of a node; execution time: “before save (finalize)”. Each modification updates the current image of the data. However not all of these changes which might represent an intermediate state need to trigger a determination, but only the (consistent) state before saving is relevant to the business. Popular samples are the derivation of the last user who changed the instance, the expensive creation of a build (e. g. resolving a piece-list) or the interaction with a remote system.
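A determination of this kind looks just like any other determination implementation; a sketch of stamping the last user who changed the monster (CHANGED_BY and the zmonster_* names are assumptions of the monster model):

```abap
" Sketch of a "before save (finalize)" determination: derive the changing user.
METHOD /bobf/if_frw_determination~execute.
  DATA lt_root TYPE zmonster_t_root.
  io_read->retrieve(
    EXPORTING iv_node = zif_monster_c=>sc_node-root
              it_key  = it_key
    IMPORTING et_data = lt_root ).
  LOOP AT lt_root REFERENCE INTO DATA(lr_root).
    lr_root->changed_by = sy-uname.
    io_modify->update(
      iv_node               = zif_monster_c=>sc_node-root
      iv_key                = lr_root->key
      is_data               = lr_root
      it_changed_attributes = VALUE #( ( zif_monster_c=>sc_node_attribute-root-changed_by ) ) ).
  ENDLOOP.
ENDMETHOD.
```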

Fill transient attributes of persistent nodes

Trigger: retrieve, create, update of a node; execution times: “after loading” and “after modify”. Transient attributes need to be updated once the basis for the calculation changes, as well as when reading the instance for the first time. Transient data in a BO node should be relevant to a business process. Texts are not. However, you could of course transiently derive a classification (A/B/C-monsters) based on some funky statistical function of the current monster-base or from a ruleset. The texts (“A” := “very scary monster”) however should be added on the UI layer. Other samples for transient determinations which I have seen in real life: totals, counts of subnodes, converted currencies, age (derived from a key-date; temporal information is tricky to persist), serialized forms of other node attributes.
In (almost) every case, you could as well persist the attribute, and in many cases a node attribute which was transient in the first step gets persisted after some time (due to performance, or since some user wanted to search for it). In this case, you simply need to change the configuration of the determination to “Derive dependent data immediately after modification” – but not the implementation!

Create properties

Trigger: The core-service “retrieve_properties” on the assigned node; execution time “before retrieve”. Properties are bits (literally) of information which inform the consumer which interactions with parts of the instance are possible – if the consumer wants to know that! One usecase of properties is to mark node attributes as disabled, read-only or mandatory. But also actions can carry properties about being enabled or not. It is crucial to understand that properties will not prevent the consumer from performing an interaction which shall not be possible (e. g. changing a read-only or even disabled node attribute). Only validations (which are covered in the next chapter) have got this power. But often, validations and property-determinations share the same logic, so there’s good reason to extract this code in a separate method and use it from the property-determination- as well as from the validation-interface-implementation.

Derive instance of transient node

Trigger: the resolution of an association to the assigned node. Execution time: “before retrieve”. In BOPF it is also possible to create nodes which are fully transient – including their KEY. If a node is modeled as transient, this determination pattern becomes selectable. The implementation has to ensure that the KEY for the same instance is stable within the session. As this is a quite rare use case, I’ll not go into the details about it (we might have a sample in the actions chapter later on).

Determination dependencies are the only way to control the order in which determinations are executed. If one determination depends on the result of a second one, the second determination is a predecessor of the first one. If you need lots of determination dependencies, this is an indicator for a flaw in either the determination- or the BO-design. This brings us to another question: what shall be the scope of a determination? There might be different responses to this question. I prefer to have one determination per isolated business aspect. If you for example derive a hat-size-code and a scariness-classification, they are semantically not interfering. Thus, I advise creating two determinations in this case, even if both are assigned to the same node, have got the same triggers and the same timepoint (after modify). You could argue that then the same data (monster header) is being retrieved twice (once in each determination), but the second data retrieval will hit the buffer and thus has very limited impact on performance. The benefits are – imho – much bigger: your model will be easier to read and to maintain (many small chunks which can also be unit-tested more easily). Also, it might be the case that throughout the lifecycle of your product, one aspect of the business logic changes and makes new triggers necessary (e. g. the scariness could be influenced by the existence of a head with multiple mouths in the future). If you don’t separate the logic, your additional trigger would also execute business logic which is actually independent of it. In our sample, the determination of the scariness would have to be executed on CUD of a HEAD-instance while the hat-size-code still depends only on changes of the ROOT.

Alright, with all this being said/written, let’s have a look at how to actually implement a determination. As we are getting close to the code, I will have to comment on the samples and advice given in the book. One major benefit of using BOPF is that implementation styles are getting more and more alike, since there are some patterns / commands which just make sense while others don’t.

Disclaimer: I’m currently writing all this text including the code on my tablet, sometimes on the phone while my year-old son sleeps (sometimes on my chest as I write). There’s no code completion, not even a syntax check. Please bear with me if this is not compileable, I hope you’re able to get the meaning though…

Dependency inversion and the place for everything

First of all, I would like to address an aspect which Paul also pointed out (and which consists of two parts): “This example is a testimony to the phrase ‘A place for everything and everything in its place.’ Instead of lumping everything in one class, it’s better to have multiple independent units”. I could not agree more with that – and I could not contradict the conclusion drawn more: “For that reason, this example keeps the determination logic in the model class itself and that logic gets called by the determination class”. With a BOPF model in place, this model becomes “the place for everything”.
Even if the (BOPF BO) model is not represented by one big class artifact or an instantiated domain class at runtime, this model exists. I don’t think that when you model your business in BOPF, you are getting stuck on the current stack: the BOPF designtime is the tool with which this model is technically described, but the model exists also without BOPF. In natural language, I can easily describe aspects of my model as well:
“As soon as the hat-size of my monster changes, I want to calculate the hat-size-code.” Having an after-modify determination with trigger CUD on the ROOT of the monster is only a structurally documented form of that sentence. As there is also an interface for reading this model, you could even think of compiling other languages’ code based on the model.

Whatever technical representation you are choosing for your model (BOPF BO-model, GENIAL-component-representation or a plain ABAP domain class), it’s good style not to implement all behavior only in one single artifact (e. g. in methods of a class). Let’s stick to the sample of the two derivations given. In a plain ABAP-class, you could have methods defined similar to this:

METHOD derive_after_root_modification.

  me->derive_hat_size( ).

  me->classify_scariness( ).

ENDMETHOD. "derive_after_root_modification

This is the straight-forward approach, but it will become clumsy as your models grow. Also, re-use is limited with respect to applying OO patterns and techniques on the behavioral methods (e. g. using inheritance in order to reduce redundancy). Thus, I like the composite pattern with which we’ll create small classes implementing the same interface:

INTERFACE zif_monster_derivation.

  METHODS derive_dependent_stuff
    IMPORTING
      io_monster TYPE REF TO zcl_monster.

ENDINTERFACE.

METHOD derive_after_root_modification.

  DATA lt_derivation TYPE STANDARD TABLE OF REF TO zif_monster_derivation WITH DEFAULT KEY.

  INSERT NEW zcl_monster_hat_size_derivation( ) INTO TABLE lt_derivation.

  INSERT NEW zcl_monster_scariness_derivation( ) INTO TABLE lt_derivation.

  LOOP AT lt_derivation INTO DATA(lo_derivation).

    lo_derivation->derive_dependent_stuff( me ).

  ENDLOOP.

ENDMETHOD. "derive_after_root_modification

Having applied this pattern, you are much more flexible when adding new business logic (or when deciding to execute the same logic at multiple points in time, for example). And you are much closer to the implementation pattern chosen in BOPF. The only difference being that you don’t need the model class (as I wrote previously). The instantiation of the framework for your BO at runtime will do exactly the same job.

So what about dependency inversion and the flexibility of your code if BOPF is not state-of-the-art anymore? It’s all in place already. Let’s have a look at the following sample implementation of the hat-size-derivation:

CLASS zcl_monster_hat_size_derivation DEFINITION.

  PUBLIC SECTION.
    INTERFACES /bobf/if_frw_determination.

  PROTECTED SECTION.
    METHODS get_hat_size_code
      IMPORTING iv_hat_size             TYPE zmonster_hat_size
      RETURNING VALUE(rv_hat_size_code) TYPE zmonster_hat_size_code.

ENDCLASS.

METHOD get_hat_size_code.

" The hat-size-code is translated to its text by the UI layer (if for example
" you use a drop-down list box in the FPM, the UI will automatically translate
" the code to its text if the domain is properly maintained with either fixed
" values or a value- and text-table).

  IF iv_hat_size > 10.
    rv_hat_size_code = gc_really_big_hat.
  ELSEIF iv_hat_size > 5.
    rv_hat_size_code = gc_big_hat.
  ELSE.
    rv_hat_size_code = gc_normal_hat.
  ENDIF.

ENDMETHOD.

 

 

METHOD /BOBF/IF_FRW_DETERMINATION~EXECUTE.

  DATA lt_head TYPE zmonster_t_head.

  io_read->retrieve(
    EXPORTING
      iv_node                 = zif_monster_c=>sc_node-head
      it_key                  = it_key
      it_requested_attributes = VALUE #( ( zif_monster_c=>sc_node_attribute-head-hat_size ) )
    IMPORTING
      et_data                 = lt_head ).

  LOOP AT lt_head REFERENCE INTO DATA(lr_head).

    lr_head->hat_size_code = me->get_hat_size_code( lr_head->hat_size ).

    io_modify->update(
      iv_node               = zif_monster_c=>sc_node-head
      iv_key                = lr_head->key
      is_data               = lr_head
      it_changed_attributes = VALUE #( ( zif_monster_c=>sc_node_attribute-head-hat_size_code ) ) ).

  ENDLOOP.

ENDMETHOD.

Note that the signature of the actual business logic is absolutely independent of BOPF. The determination class simply offers an interface (literally) to the framework. If you switched to another framework, you could implement a second interface in which method implementation you also call the “business logic” (get_hat_size).

I sincerely hope I could address the concerns I’ve got about using a model class, and that you also come to the conclusion that, with many atomic classes in place and the BOPF model described in the system, there is no need for a model class. The reason I’m so opposed to such an entity is a major flaw in the way it is usually used, which brings a terrifying performance penalty. We’ll come to that in the next paragraphs.

The determination interface methods

Paul has explained the purposes of the interface methods nicely in “ABAP to the future”. You can also have a look at the interface-documentation in your system. As far as I remember it’s quite extensive. Above I wrote that with BOPF in place, the implementations are getting harmonized within a development team. I would therefore like to explain the basic skeletons and DOs and DON’Ts within the implementation of those methods.

Checking for relevant changes

METHOD /BOBF/IF_FRW_DETERMINATION~CHECK_DELTA.

* First, compare the previous and the current (image-to-be) state of the instances which have changed. Note that this is a comparatively expensive operation.

io_read->compare(
  EXPORTING
    iv_node_key        = zif_monster_c=>sc_node-head
    it_key             = ct_key
    iv_fill_attributes = abap_true
  IMPORTING
    eo_change          = DATA(lo_change) ).

* IF lo_change->has_changes( ) = abap_true. … “This is unnecessary, as we’ll only get instances passed in ct_key which have changed.

* io_read->retrieve( … ) – this is usually not necessary in check_delta, as we’re only looking for the change, not for the current values (this, we’ll do in “check”)

lo_change->get_changes( IMPORTING et_change = DATA( lt_change ) ).

LOOP AT ct_key INTO DATA( ls_key ). "Usually the last step in check and check_delta: have a look at all the instances which changed and sort out those which don’t have at least one changed attribute upon which our business logic depends. Note that determinations are mass-enabled. If you see INDEX 1 somewhere in the code, this is most probably a severe error or at least a performance penalty!

READ TABLE lt_change ASSIGNING FIELD-SYMBOL( <ls_instance_change> ) WITH KEY key_sorted COMPONENTS key = ls_key-key.

* CHECK sy-subrc = 0. “This is not necessary as the instance got passed to the determination as it has changed (assuming that the trigger was the assigned node of course). If you want to program so defensively that you don’t trust the framework fulfilling its own contract, use ASSERT sy-subrc = 0.
READ TABLE <ls_instance_change>-attributes TRANSPORTING NO FIELDS WITH KEY table_line = zif_monster_c=>sc_node_attribute-head-hat_size.

IF sy-subrc <> 0.

* The determination-relevant attribute did not change -> exclude this instance from further processing

DELETE ct_key.

ENDIF.

ENDLOOP.

ENDMETHOD.

Checking for relevant values

METHOD /BOBF/IF_FRW_DETERMINATION~CHECK.

* Get the current state (precisely the target state to which the modification will lead) of all the instances which have changed.

DATA lt_head TYPE ZMONSTER_T_HEAD. “The combined table type of the node to be retrieved

io_read->retrieve(
  EXPORTING
    iv_node                 = zif_monster_c=>sc_node-head
    it_key                  = ct_key
    it_requested_attributes = VALUE #( ( zif_monster_c=>sc_node_attribute-head-wears_a_hat ) )
  IMPORTING
    et_data                 = lt_head ).

LOOP AT lt_head ASSIGNING FIELD-SYMBOL( <ls_head> ). “You could also very well loop at ct_key, in order to make sure you process every instance. This makes sense if you don’t retrieve all the instances in the first step.

IF <ls_head>-wears_a_hat = abap_true. “check the content of some attribute of the node which makes the derivation logic unnecessary.

DELETE ct_key WHERE key = <ls_head>-key. “exclude the instance from further processing

ENDIF.

ENDLOOP.

ENDMETHOD.

Executing the actual calculation

METHOD /BOBF/IF_FRW_DETERMINATION~EXECUTE.

DATA lt_head TYPE ZMONSTER_T_HEAD.

io_read->retrieve(
  EXPORTING
    iv_node                 = zif_monster_c=>sc_node-head
    it_key                  = it_key
    it_requested_attributes = VALUE #( ( zif_monster_c=>sc_node_attribute-head-hat_size ) )
  IMPORTING
    et_data                 = lt_head
). "Mass-retrieval of all the (potentially associated) information upon which the determination logic depends. A retrieve (particularly a retrieve by association) might result in a SELECT from the database. Thus, it is key for performance not to do this in a loop, but mass-enabled at the beginning of the method.

LOOP AT lt_head REFERENCE INTO DATA( lr_head ). “looping ‘REFERENCE INTO’ has the benefit that the data reference can be directly used for the modification. You could also very well loop at it_key, in order to make sure you process every instance. This makes sense if you don’t retrieve all the instances in the first step.

lr_head->hat_size_code = me->get_hat_size_code( lr_head->hat_size ).

io_modify->update(
iv_node = zif_monster_c=>sc_node-head
  iv_key = lr_head->key
  is_data = lr_head
  it_changed_attributes = value #( ( zif_monster_c=>sc_node_attribute-head-hat_size_code ) ) ). “The modify-handler buffers the change-instructions of the command issued. These changes are flushed to the buffer by the end of roundtrip. Therefore, it's no performance-penalty to use the create/update/delete-methods of the modify-interface

ENDLOOP.

ENDMETHOD.

So far, so good. I hope you agree that the command pattern has the benefit of being very verbose in combination with the constant interface. Coding for example
io_modify->update(
iv_node = zif_monster_c=>sc_node-head
  iv_key = lr_head->key
  is_data = lr_head
  it_changed_attributes = value #( ( zif_monster_c=>sc_node_attribute-head-hat_size_code ) ) )

is in my eyes very close to writing a comment “Update the hat-size-code of the monster’s head”.

Architectural aspects

Some final words on why I don’t like to delegate further from a determination class to a model class. What is so “wrong” about lo_monster_model = zcl_monster_model=>get_instance( ls_monster_header-monster_number )?

There are some things which may happen when delegating to an instance of a model class which absolutely contradict the BOPF architectural paradigms. They all arise from the conflict between the service-layer pattern used in BOPF (you talk to a service providing the instances you are operating with) and the domain-model pattern common in Java and the like (each instance of the class represents an instance of a real-world object):

  • Own state
    In BOPF, the state is kept in the buffer class of each BO node. This buffer is accessed by the framework. Based on the notifications of this buffer, the transactional changes are calculated and propagated to the consumer. This is not possible if other non-BOPF-buffers exist. But actually, this is the paradigm of a domain-model: Each instance shall hold the state of the real-world-representation. So what to do? Whenever you implement business logic in BOPF, the actual logic needs to be implemented stateless. There must not be any member accessed, neither at a third-party-model-class nor at the determination class itself!
  • Reduction of database access
    Considering the latency of the various memories, DB access is one thing which really kills performance. Thus, BOPF tries to reduce the number of DB interactions: a transparent buffer based on internal tables exists for each node, and all interfaces are mass-enabled, which allows a SELECT ... INTO TABLE instead of a SELECT SINGLE. When using a domain-model pattern, the factory needs to provide a mass-enabled method in order to achieve the same (which I have rarely seen). Also, as BOPF has already read the data from the DB, the factory should also allow instantiation with data (and not only with a key). The code samples in the book also imply that within
    get_instance( monster_number ), a query is used in order to translate the semantic key into the technical key. As the query always disregards the transactional buffer, not only is an unnecessary data access being made: the instance could not be created for a monster which has just been created.
  • Lazy loading
    Usually, if you create a BOPF model, each BO node has its own database table with the technical KEY being the primary key of the database. If this is the case, each node can (and shall) be separately loadable. This means that all subnodes of a node are only read from the DB if the node is explicitly requested (either by a direct retrieve or, most likely, with a retrieve by association from the parent node). Using a domain model, you also have to implement this kind of lazy loading, which is a bit tricky and which, honestly, I have not yet seen properly in action.
  • Mass-enabling
    As written above, BOPF minimizes the number of DB accesses. But it is also optimized for performance with respect to ABAP itself: data redundancy (and copying) is minimized by transporting data references (to the buffer) through the interface-method signatures. Furthermore, it uses and enforces modern ABAP performance tweaks such as secondary keys on internal tables. Last but not least, ABAP handles internal tables of structured data very well, while the instantiation of ABAP classes is comparatively expensive.
  • Dependency injection
    As you probably noticed, BOPF injects the io_read and io_modify accessors into the interface methods. This not only ensures that the proper responsibilities are adhered to (e.g. a validation, which shall only perform checks, does not get a chance to change data, as there is no io_modify), but it also simplifies mocking when it comes to unit testing.
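The statelessness and dependency-injection points above can be sketched in a few lines of ABAP. This is a rough, hedged illustration only: the class, node and attribute names (zcl_d_calc_total, total_amount, the combined table type) are hypothetical, while the io_read/io_modify accessors follow the /BOBF/IF_FRW_DETERMINATION interface signature.

```abap
* Hedged sketch of a stateless determination: all state comes in through
* the injected io_read and goes out through io_modify - no class members
* are touched. Node and attribute names are placeholders.
CLASS zcl_d_calc_total DEFINITION FINAL.
  PUBLIC SECTION.
    INTERFACES /bobf/if_frw_determination.
ENDCLASS.

CLASS zcl_d_calc_total IMPLEMENTATION.
  METHOD /bobf/if_frw_determination~execute.
    DATA lt_root TYPE ztest_t_root.          " hypothetical combined table type
    FIELD-SYMBOLS <ls_root> TYPE ztest_s_root.

    " Mass-enabled read of all requested instances from the buffer
    io_read->retrieve(
      EXPORTING iv_node = is_ctx-node_key
                it_key  = it_key
      IMPORTING et_data = lt_root ).

    LOOP AT lt_root ASSIGNING <ls_root>.
      <ls_root>-total_amount = <ls_root>-price * <ls_root>-quantity.
      " Write back through the injected modify accessor only
      io_modify->update(
        iv_node = is_ctx-node_key
        iv_key  = <ls_root>-key
        is_data = REF #( <ls_root> ) ).
    ENDLOOP.
  ENDMETHOD.
ENDCLASS.
```

In a unit test, io_read and io_modify can simply be replaced by mock implementations of /BOBF/IF_FRW_READ and /BOBF/IF_FRW_MODIFY, which is exactly the benefit the bullet above describes.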

I hope you can now share my enthusiasm about the architectural patterns used in BOPF and perhaps understand my skepticism about a “model class”.

<-- Back: Locking and authorization management


Error while creating location - BO - EHFND_LOCATIONS


Hi Experts,

 

I am trying to create locations using BO EHFND_LOCATIONS, but I am getting a short dump. Below is the code; could someone help me see what I am missing?

 

*--- Class object types
DATA : lo_trans TYPE REF TO /bobf/if_tra_transaction_mgr,
       lo_serv  TYPE REF TO /bobf/if_tra_service_manager.

DATA : lt_mod   TYPE /bobf/t_frw_modification,
       ls_mod   TYPE /bobf/s_frw_modification.

DATA : lv_locid TYPE nrfrom.

FIELD-SYMBOLS : <fs_location> TYPE ehfnds_loc_revision,
                <fs_locroot>  TYPE ehfnds_loc_root,
                <fs_locdesc>  TYPE ehfnds_loc_revision_name_text.

*--- Get instances of API
lo_trans = /bobf/cl_tra_trans_mgr_factory=>get_transaction_manager( ).
lo_serv  = /bobf/cl_tra_serv_mgr_factory=>get_service_manager(
             if_ehfnd_loc_c=>sc_bo_key ).

*--- Populate Root node
CALL FUNCTION 'NUMBER_GET_NEXT'
  EXPORTING
    nr_range_nr = 'IE'
    object      = 'EHFNDLCNID'
  IMPORTING
    number      = lv_locid.
IF sy-subrc <> 0.
* Implement suitable error handling here
ENDIF.

ls_mod-node        = if_ehfnd_loc_c=>sc_node-root.
ls_mod-change_mode = /bobf/if_frw_c=>sc_modify_create.
ls_mod-key         = /bobf/cl_frw_factory=>get_new_key( ).

CREATE DATA ls_mod-data TYPE ehfnds_loc_root.
ASSIGN ls_mod-data->* TO <fs_locroot>.

<fs_locroot>-key = ls_mod-key.
<fs_locroot>-id  = lv_locid.

APPEND ls_mod TO lt_mod.
CLEAR ls_mod.

*--- Revision node
ls_mod-node        = if_ehfnd_loc_c=>sc_node-revision.
ls_mod-change_mode = /bobf/if_frw_c=>sc_modify_create.
ls_mod-key         = /bobf/cl_frw_factory=>get_new_key( ).

CREATE DATA ls_mod-data TYPE ehfnds_loc_revision.
ASSIGN ls_mod-data->* TO <fs_location>.

<fs_location>-key          = ls_mod-key.
<fs_location>-type         = 'LOCATION'.
*<fs_location>-status      = '02'.
<fs_location>-funct_loc_id = 'TEST_MIGRATION'.

APPEND ls_mod TO lt_mod.
CLEAR ls_mod.

*--- Revision text node
ls_mod-node        = if_ehfnd_loc_c=>sc_node-revision_name_text.
ls_mod-change_mode = /bobf/if_frw_c=>sc_modify_create.
ls_mod-key         = /bobf/cl_frw_factory=>get_new_key( ).

CREATE DATA ls_mod-data TYPE ehfnds_loc_revision_name_text.
ASSIGN ls_mod-data->* TO <fs_locdesc>.

<fs_locdesc>-key  = ls_mod-key.
<fs_locdesc>-text = 'Test migration text'.

APPEND ls_mod TO lt_mod.
CLEAR ls_mod.

*--- Call modify and save
lo_serv->modify(
  EXPORTING
    it_modification = lt_mod
* IMPORTING
*   eo_change       =
*   eo_message      =
  ).

lo_trans->save(
* EXPORTING
*   iv_transaction_pattern = /bobf/if_tra_c=>gc_tp_save_and_continue
* IMPORTING
*   ev_rejected            =
*   eo_change              =
*   eo_message             =
*   et_rejecting_bo_key    =
  ).
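One frequent cause of a dump with modifications like the above: when creating child nodes such as REVISION and REVISION_NAME_TEXT, /BOBF/S_FRW_MODIFICATION also carries the parent linkage in its SOURCE_NODE, SOURCE_KEY and ASSOCIATION fields, and creates typically fail if these are left initial. A hedged sketch for the revision entry follows; the exact association constant name is an assumption and should be verified in IF_EHFND_LOC_C.

```abap
* Hedged sketch (untested): a child-node create usually also needs the
* parent linkage, otherwise the framework cannot attach the instance.
* sc_association-root-revision is an assumed constant name - check it
* in the generated constants interface IF_EHFND_LOC_C.
ls_mod-node        = if_ehfnd_loc_c=>sc_node-revision.
ls_mod-change_mode = /bobf/if_frw_c=>sc_modify_create.
ls_mod-key         = /bobf/cl_frw_factory=>get_new_key( ).
ls_mod-source_node = if_ehfnd_loc_c=>sc_node-root.
ls_mod-source_key  = <fs_locroot>-key.   " key of the ROOT created above
ls_mod-association = if_ehfnd_loc_c=>sc_association-root-revision.
```

The same applies to the REVISION_NAME_TEXT entry, with the revision node as its source.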

Delete Root


Hi experts

I'm trying to delete a root:

 

DATA: ls_sel_opt TYPE /bobf/s_frw_query_selparam,
      lt_sel_opt TYPE /bobf/t_frw_query_selparam.

ls_sel_opt-attribute_name = /scmtms/if_tor_c=>sc_query_attribute-root-planning_attributes-tor_id.
ls_sel_opt-sign   = 'I'.
ls_sel_opt-option = 'EQ'.
ls_sel_opt-low    = p_custid.
ls_sel_opt-high   = ''.
INSERT ls_sel_opt INTO TABLE lt_sel_opt.

DATA: lo_change         TYPE REF TO /bobf/if_tra_change,
      lo_message        TYPE REF TO /bobf/if_frw_message,
      lt_failed_key     TYPE /bobf/t_frw_key,
      lt_failed_act_key TYPE /bobf/t_frw_key,
      lo_srvmgr         TYPE REF TO /bobf/if_tra_service_manager.
DATA: ls_parameters   TYPE /scmtms/tor_id,
      lr_s_parameters TYPE REF TO data,
      lx_frw          TYPE REF TO /bobf/cx_frw.

DATA: lo_srv_mgr TYPE REF TO /bobf/if_tra_service_manager,
      lt_tor_key TYPE /bobf/t_frw_key.

* Get an instance of a service manager, here for BO TOR
lo_srv_mgr = /bobf/cl_tra_serv_mgr_factory=>get_service_manager( /scmtms/if_tor_c=>sc_bo_key ).

* Query business object - KEY
CALL METHOD lo_srv_mgr->query
  EXPORTING
    iv_query_key            = /scmtms/if_tor_c=>sc_query-root-planning_attributes
    it_selection_parameters = lt_sel_opt
  IMPORTING
    et_key                  = lt_tor_key.

*   CREATE DATA lr_s_parameters.
* Carry out check
*   lr_s_parameters->delete_root = 'X'.

CALL METHOD lo_srv_mgr->do_action
  EXPORTING
    iv_act_key           = /scmtms/if_tor_c=>sc_action-root-delete_root
    it_key               = lt_tor_key
    is_parameters        = lr_s_parameters
  IMPORTING
    eo_change            = lo_change
    eo_message           = lo_message
    et_failed_key        = lt_failed_key
    et_failed_action_key = lt_failed_act_key.

 

 

 

A dump occurs, but I don't know why. Could you help me?

The dump is attached.
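Note for readers hitting the same dump: in the listing above, CREATE DATA lr_s_parameters is commented out, so do_action receives an unbound reference in is_parameters, which by itself can trigger a short dump. A hedged sketch of the likely fix follows; the parameter structure name is a placeholder, to be looked up in the delete_root action definition of /SCMTMS/TOR (transaction BOBX).

```abap
* Hedged sketch: bind is_parameters to the action's parameter structure
* before calling do_action (or omit it entirely if the action takes no
* parameters). '/SCMTMS/S_TOR_A_DELETE' is a placeholder name - look up
* the real structure in the action definition.
DATA lr_s_parameters TYPE REF TO data.
CREATE DATA lr_s_parameters TYPE ('/SCMTMS/S_TOR_A_DELETE').   " placeholder

lo_srv_mgr->do_action(
  EXPORTING
    iv_act_key           = /scmtms/if_tor_c=>sc_action-root-delete_root
    it_key               = lt_tor_key
    is_parameters        = lr_s_parameters
  IMPORTING
    eo_change            = lo_change
    eo_message           = lo_message
    et_failed_key        = lt_failed_key
    et_failed_action_key = lt_failed_act_key ).
```

After the call, et_failed_key and eo_message should be checked before saving.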

Add Freight unit to Freight Order .


Hi guys,

I need to assign the resource and FU to a newly created FO.

I tried to use the action ADD_FU_BY_FUID, but a dump occurred.

I read this document, which says:

Transportation Management Missing Functions/User expectations - Freight Order Management

 

 

The document says:

Freight unit cannot be added to Freight Order via BOPF

 

You cannot add a Freight Unit to a Freight Order via the action ADD_FU_BY_FUID because a dump occurs. The standard way is to add FUs to FROs on the UI. Use of the test tool for business purposes is not supported.

 

How can I do it in my code? Please help me; my requirement is urgent.

 

 

Please see my code attached.

BOPF Lock a node


I have a sub-issue of a QIM issue that I am trying to lock in order to create a description node attached to the sub-issue root node. My modify call is returning an error because the sub-issue node is not locked. How can I lock a node? Any help would be greatly appreciated. Thank you.
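For readers with the same problem: in BOPF, locking is typically done by retrieving the instances in exclusive edit mode via the service manager before the modify call; the framework then holds the lock for the transaction. A hedged sketch follows, in which the QIM constants interface and the result table type are placeholders, not the real QIM names.

```abap
* Hedged sketch: retrieve in exclusive edit mode to lock the instances
* before modifying them. zif_qim_c=>... and ztt_sub_issue are
* placeholders for the real QIM constants interface and combined
* table type of the sub-issue node.
DATA: lo_srv_mgr      TYPE REF TO /bobf/if_tra_service_manager,
      lo_message      TYPE REF TO /bobf/if_frw_message,
      lt_failed_key   TYPE /bobf/t_frw_key,
      lt_subissue_key TYPE /bobf/t_frw_key,
      lt_subissue     TYPE ztt_sub_issue.

lo_srv_mgr = /bobf/cl_tra_serv_mgr_factory=>get_service_manager( zif_qim_c=>sc_bo_key ).

lo_srv_mgr->retrieve(
  EXPORTING
    iv_node_key   = zif_qim_c=>sc_node-sub_issue
    it_key        = lt_subissue_key
    iv_edit_mode  = /bobf/if_conf_c=>sc_edit_exclusive
  IMPORTING
    eo_message    = lo_message
    et_data       = lt_subissue
    et_failed_key = lt_failed_key ).
```

Once this retrieve succeeds (check et_failed_key), a subsequent modify that creates the description node under the sub-issue should no longer be rejected for a missing lock.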

BOPF - Compare information between two screens


Hi!

 

I wonder if anyone knows how to compare information between BOPF screens.

 

I'm trying to compare information between two screens. For example: within NWBC - Forwarding Order Management - Overview Forwarding Order, when we open any order we have the tabs General Data and Business Partner. I would like to pick up some information from General Data and compare it with data on the Business Partner tab.

 

 

Regards,

Felipe
